00:00:00.001 Started by upstream project "autotest-per-patch" build number 132853
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.112 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.113 The recommended git tool is: git
00:00:00.113 using credential 00000000-0000-0000-0000-000000000002
00:00:00.117 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.151 Fetching changes from the remote Git repository
00:00:00.152 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.189 Using shallow fetch with depth 1
00:00:00.189 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.189 > git --version # timeout=10
00:00:00.231 > git --version # 'git version 2.39.2'
00:00:00.231 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.255 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.255 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.108 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.117 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.127 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:06.127 > git config core.sparsecheckout # timeout=10
00:00:06.138 > git read-tree -mu HEAD # timeout=10
00:00:06.152 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:06.178 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:06.178 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:06.279 [Pipeline] Start of Pipeline
00:00:06.291 [Pipeline] library
00:00:06.292 Loading library shm_lib@master
00:00:06.292 Library shm_lib@master is cached. Copying from home.
00:00:06.308 [Pipeline] node
00:00:06.317 Running on WFP4 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:06.319 [Pipeline] {
00:00:06.326 [Pipeline] catchError
00:00:06.327 [Pipeline] {
00:00:06.337 [Pipeline] wrap
00:00:06.345 [Pipeline] {
00:00:06.352 [Pipeline] stage
00:00:06.353 [Pipeline] { (Prologue)
00:00:06.553 [Pipeline] sh
00:00:06.836 + logger -p user.info -t JENKINS-CI
00:00:06.855 [Pipeline] echo
00:00:06.857 Node: WFP4
00:00:06.866 [Pipeline] sh
00:00:07.166 [Pipeline] setCustomBuildProperty
00:00:07.175 [Pipeline] echo
00:00:07.176 Cleanup processes
00:00:07.181 [Pipeline] sh
00:00:07.458 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.458 1241129 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.470 [Pipeline] sh
00:00:07.755 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.755 ++ grep -v 'sudo pgrep'
00:00:07.755 ++ awk '{print $1}'
00:00:07.755 + sudo kill -9
00:00:07.755 + true
00:00:07.767 [Pipeline] cleanWs
00:00:07.773 [WS-CLEANUP] Deleting project workspace...
00:00:07.773 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.779 [WS-CLEANUP] done
00:00:07.782 [Pipeline] setCustomBuildProperty
00:00:07.791 [Pipeline] sh
00:00:08.068 + sudo git config --global --replace-all safe.directory '*'
00:00:08.173 [Pipeline] httpRequest
00:00:08.556 [Pipeline] echo
00:00:08.558 Sorcerer 10.211.164.20 is alive
00:00:08.568 [Pipeline] retry
00:00:08.571 [Pipeline] {
00:00:08.586 [Pipeline] httpRequest
00:00:08.591 HttpMethod: GET
00:00:08.592 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.593 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.610 Response Code: HTTP/1.1 200 OK
00:00:08.611 Success: Status code 200 is in the accepted range: 200,404
00:00:08.611 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:13.330 [Pipeline] }
00:00:13.348 [Pipeline] // retry
00:00:13.356 [Pipeline] sh
00:00:13.641 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:13.657 [Pipeline] httpRequest
00:00:14.083 [Pipeline] echo
00:00:14.085 Sorcerer 10.211.164.20 is alive
00:00:14.094 [Pipeline] retry
00:00:14.096 [Pipeline] {
00:00:14.112 [Pipeline] httpRequest
00:00:14.117 HttpMethod: GET
00:00:14.117 URL: http://10.211.164.20/packages/spdk_b9cf2755988384073666302a3234e53031e50ddf.tar.gz
00:00:14.118 Sending request to url: http://10.211.164.20/packages/spdk_b9cf2755988384073666302a3234e53031e50ddf.tar.gz
00:00:14.135 Response Code: HTTP/1.1 200 OK
00:00:14.135 Success: Status code 200 is in the accepted range: 200,404
00:00:14.136 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_b9cf2755988384073666302a3234e53031e50ddf.tar.gz
00:00:57.952 [Pipeline] }
00:00:57.972 [Pipeline] // retry
00:00:57.981 [Pipeline] sh
00:00:58.270 + tar --no-same-owner -xf spdk_b9cf2755988384073666302a3234e53031e50ddf.tar.gz
00:01:00.823 [Pipeline] sh
00:01:01.110 + git -C spdk log --oneline -n5
00:01:01.110 b9cf27559 script/rpc.py: Put python library fisrt in library path
00:01:01.110 d58eef2a2 nvme/rdma: Fix reinserting qpair in connecting list after stale state
00:01:01.110 2104eacf0 test/check_so_deps: use VERSION to look for prior tags
00:01:01.110 66289a6db build: use VERSION file for storing version
00:01:01.110 626389917 nvme/rdma: Don't limit max_sge if UMR is used
00:01:01.122 [Pipeline] }
00:01:01.136 [Pipeline] // stage
00:01:01.147 [Pipeline] stage
00:01:01.149 [Pipeline] { (Prepare)
00:01:01.234 [Pipeline] writeFile
00:01:01.244 [Pipeline] sh
00:01:01.524 + logger -p user.info -t JENKINS-CI
00:01:01.536 [Pipeline] sh
00:01:01.820 + logger -p user.info -t JENKINS-CI
00:01:01.835 [Pipeline] sh
00:01:02.120 + cat autorun-spdk.conf
00:01:02.120 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:02.120 SPDK_TEST_NVMF=1
00:01:02.120 SPDK_TEST_NVME_CLI=1
00:01:02.120 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:02.120 SPDK_TEST_NVMF_NICS=e810
00:01:02.120 SPDK_TEST_VFIOUSER=1
00:01:02.120 SPDK_RUN_UBSAN=1
00:01:02.120 NET_TYPE=phy
00:01:02.127 RUN_NIGHTLY=0
00:01:02.132 [Pipeline] readFile
00:01:02.156 [Pipeline] withEnv
00:01:02.158 [Pipeline] {
00:01:02.171 [Pipeline] sh
00:01:02.464 + set -ex
00:01:02.464 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:02.464 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:02.464 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:02.464 ++ SPDK_TEST_NVMF=1
00:01:02.464 ++ SPDK_TEST_NVME_CLI=1
00:01:02.464 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:02.464 ++ SPDK_TEST_NVMF_NICS=e810
00:01:02.464 ++ SPDK_TEST_VFIOUSER=1
00:01:02.464 ++ SPDK_RUN_UBSAN=1
00:01:02.464 ++ NET_TYPE=phy
00:01:02.464 ++ RUN_NIGHTLY=0
00:01:02.464 + case $SPDK_TEST_NVMF_NICS in
00:01:02.464 + DRIVERS=ice
00:01:02.464 + [[ tcp == \r\d\m\a ]]
00:01:02.464 + [[ -n ice ]]
00:01:02.464 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:02.464 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:02.464 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:02.464 rmmod: ERROR: Module i40iw is not currently loaded
00:01:02.464 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:02.464 + true
00:01:02.464 + for D in $DRIVERS
00:01:02.464 + sudo modprobe ice
00:01:02.464 + exit 0
00:01:02.473 [Pipeline] }
00:01:02.488 [Pipeline] // withEnv
00:01:02.493 [Pipeline] }
00:01:02.507 [Pipeline] // stage
00:01:02.516 [Pipeline] catchError
00:01:02.518 [Pipeline] {
00:01:02.532 [Pipeline] timeout
00:01:02.532 Timeout set to expire in 1 hr 0 min
00:01:02.534 [Pipeline] {
00:01:02.548 [Pipeline] stage
00:01:02.550 [Pipeline] { (Tests)
00:01:02.565 [Pipeline] sh
00:01:02.850 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:02.850 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:02.850 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:02.850 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:02.850 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:02.850 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:02.850 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:02.850 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:02.850 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:02.850 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:02.850 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:02.850 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:02.850 + source /etc/os-release
00:01:02.850 ++ NAME='Fedora Linux'
00:01:02.850 ++ VERSION='39 (Cloud Edition)'
00:01:02.850 ++ ID=fedora
00:01:02.850 ++ VERSION_ID=39
00:01:02.850 ++ VERSION_CODENAME=
00:01:02.850 ++ PLATFORM_ID=platform:f39
00:01:02.850 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:02.850 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:02.850 ++ LOGO=fedora-logo-icon
00:01:02.850 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:02.850 ++ HOME_URL=https://fedoraproject.org/
00:01:02.850 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:02.850 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:02.850 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:02.850 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:02.850 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:02.850 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:02.850 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:02.850 ++ SUPPORT_END=2024-11-12
00:01:02.850 ++ VARIANT='Cloud Edition'
00:01:02.850 ++ VARIANT_ID=cloud
00:01:02.850 + uname -a
00:01:02.850 Linux spdk-wfp-04 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux
00:01:02.850 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:05.385 Hugepages
00:01:05.385 node hugesize free / total
00:01:05.385 node0 1048576kB 0 / 0
00:01:05.385 node0 2048kB 0 / 0
00:01:05.385 node1 1048576kB 0 / 0
00:01:05.385 node1 2048kB 0 / 0
00:01:05.385
00:01:05.385 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:05.385 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:05.385 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:05.385 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:05.385 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:05.385 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:05.385 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:05.385 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:05.385 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:05.385 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:01:05.385 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:05.385 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:05.385 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:05.385 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:05.385 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:05.385 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:05.385 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:05.385 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:05.385 + rm -f /tmp/spdk-ld-path
00:01:05.385 + source autorun-spdk.conf
00:01:05.385 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:05.385 ++ SPDK_TEST_NVMF=1
00:01:05.385 ++ SPDK_TEST_NVME_CLI=1
00:01:05.385 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:05.385 ++ SPDK_TEST_NVMF_NICS=e810
00:01:05.385 ++ SPDK_TEST_VFIOUSER=1
00:01:05.385 ++ SPDK_RUN_UBSAN=1
00:01:05.385 ++ NET_TYPE=phy
00:01:05.385 ++ RUN_NIGHTLY=0
00:01:05.385 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:05.385 + [[ -n '' ]]
00:01:05.385 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:05.385 + for M in /var/spdk/build-*-manifest.txt
00:01:05.385 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:05.385 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:05.385 + for M in /var/spdk/build-*-manifest.txt
00:01:05.385 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:05.385 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:05.385 + for M in /var/spdk/build-*-manifest.txt
00:01:05.385 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:05.385 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:05.385 ++ uname
00:01:05.385 + [[ Linux == \L\i\n\u\x ]]
00:01:05.385 + sudo dmesg -T
00:01:05.645 + sudo dmesg --clear
00:01:05.645 + dmesg_pid=1242055
00:01:05.645 + [[ Fedora Linux == FreeBSD ]]
00:01:05.645 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:05.645 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:05.645 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:05.645 + [[ -x /usr/src/fio-static/fio ]]
00:01:05.645 + export FIO_BIN=/usr/src/fio-static/fio
00:01:05.645 + FIO_BIN=/usr/src/fio-static/fio
00:01:05.645 + sudo dmesg -Tw
00:01:05.645 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:05.645 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:05.645 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:05.646 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:05.646 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:05.646 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:05.646 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:05.646 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:05.646 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:05.646 10:14:39 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:05.646 10:14:39 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:05.646 10:14:39 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:05.646 10:14:39 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:01:05.646 10:14:39 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:01:05.646 10:14:39 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:05.646 10:14:39 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:01:05.646 10:14:39 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:01:05.646 10:14:39 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:01:05.646 10:14:39 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:01:05.646 10:14:39 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:01:05.646 10:14:39 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:05.646 10:14:39 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:05.646 10:14:39 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:05.646 10:14:39 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:05.646 10:14:39 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:05.646 10:14:39 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:05.646 10:14:39 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:05.646 10:14:39 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:05.646 10:14:39 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:05.646 10:14:39 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:05.646 10:14:39 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:05.646 10:14:39 -- paths/export.sh@5 -- $ export PATH
00:01:05.646 10:14:39 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:05.646 10:14:39 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:05.646 10:14:39 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:05.646 10:14:39 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733994879.XXXXXX
00:01:05.646 10:14:39 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733994879.c6Bzlc
00:01:05.646 10:14:39 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:05.646 10:14:39 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:05.646 10:14:39 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:05.646 10:14:39 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:05.646 10:14:39 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:05.646 10:14:39 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:05.646 10:14:39 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:05.646 10:14:39 -- common/autotest_common.sh@10 -- $ set +x
00:01:05.646 10:14:39 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:01:05.646 10:14:39 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:05.646 10:14:39 -- pm/common@17 -- $ local monitor
00:01:05.646 10:14:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:05.646 10:14:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:05.646 10:14:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:05.646 10:14:39 -- pm/common@21 -- $ date +%s
00:01:05.646 10:14:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:05.646 10:14:39 -- pm/common@21 -- $ date +%s
00:01:05.646 10:14:39 -- pm/common@25 -- $ sleep 1
00:01:05.646 10:14:39 -- pm/common@21 -- $ date +%s
00:01:05.646 10:14:39 -- pm/common@21 -- $ date +%s
00:01:05.646 10:14:39 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733994879
00:01:05.646 10:14:39 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733994879
00:01:05.646 10:14:39 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733994879
00:01:05.646 10:14:39 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733994879
00:01:05.646 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733994879_collect-cpu-load.pm.log
00:01:05.646 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733994879_collect-vmstat.pm.log
00:01:05.646 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733994879_collect-cpu-temp.pm.log
00:01:05.906 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733994879_collect-bmc-pm.bmc.pm.log
00:01:06.843 10:14:40 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:06.843 10:14:40 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:06.843 10:14:40 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:06.843 10:14:40 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:06.843 10:14:40 -- spdk/autobuild.sh@16 -- $ date -u
00:01:06.843 Thu Dec 12 09:14:40 AM UTC 2024
00:01:06.843 10:14:40 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:06.843 v25.01-rc1-2-gb9cf27559
00:01:06.843 10:14:40 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:06.843 10:14:40 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:06.843 10:14:40 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:06.843 10:14:40 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:06.843 10:14:40 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:06.843 10:14:40 -- common/autotest_common.sh@10 -- $ set +x
00:01:06.843 ************************************
00:01:06.843 START TEST ubsan
00:01:06.843 ************************************
00:01:06.843 10:14:40 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:06.843 using ubsan
00:01:06.843
00:01:06.843 real 0m0.000s
00:01:06.843 user 0m0.000s
00:01:06.843 sys 0m0.000s
00:01:06.843 10:14:40 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:06.843 10:14:40 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:06.843 ************************************
00:01:06.843 END TEST ubsan
00:01:06.843 ************************************
00:01:06.843 10:14:40 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:06.843 10:14:40 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:06.843 10:14:40 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:06.843 10:14:40 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:06.843 10:14:40 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:06.843 10:14:40 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:06.843 10:14:40 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:06.843 10:14:40 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:06.843 10:14:40 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:06.843 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:06.843 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:07.410 Using 'verbs' RDMA provider
00:01:20.204 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:32.414 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:32.414 Creating mk/config.mk...done.
00:01:32.414 Creating mk/cc.flags.mk...done.
00:01:32.414 Type 'make' to build.
00:01:32.414 10:15:06 -- spdk/autobuild.sh@70 -- $ run_test make make -j96
00:01:32.414 10:15:06 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:32.414 10:15:06 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:32.414 10:15:06 -- common/autotest_common.sh@10 -- $ set +x
00:01:32.414 ************************************
00:01:32.414 START TEST make
00:01:32.414 ************************************
00:01:32.414 10:15:06 make -- common/autotest_common.sh@1129 -- $ make -j96
00:01:34.324 The Meson build system
00:01:34.324 Version: 1.5.0
00:01:34.324 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:34.324 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:34.324 Build type: native build
00:01:34.324 Project name: libvfio-user
00:01:34.324 Project version: 0.0.1
00:01:34.324 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:34.324 C linker for the host machine: cc ld.bfd 2.40-14
00:01:34.324 Host machine cpu family: x86_64
00:01:34.324 Host machine cpu: x86_64
00:01:34.324 Run-time dependency threads found: YES
00:01:34.324 Library dl found: YES
00:01:34.324 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:34.324 Run-time dependency json-c found: YES 0.17
00:01:34.324 Run-time dependency cmocka found: YES 1.1.7
00:01:34.324 Program pytest-3 found: NO
00:01:34.324 Program flake8 found: NO
00:01:34.324 Program misspell-fixer found: NO
00:01:34.324 Program restructuredtext-lint found: NO
00:01:34.324 Program valgrind found: YES (/usr/bin/valgrind)
00:01:34.324 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:34.324 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:34.324 Compiler for C supports arguments -Wwrite-strings: YES
00:01:34.324 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:34.324 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:34.324 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:34.324 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:34.324 Build targets in project: 8
00:01:34.324 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:34.324 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:34.324
00:01:34.324 libvfio-user 0.0.1
00:01:34.324
00:01:34.324 User defined options
00:01:34.324 buildtype : debug
00:01:34.324 default_library: shared
00:01:34.324 libdir : /usr/local/lib
00:01:34.324
00:01:34.324 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:34.891 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:34.891 [1/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:34.891 [2/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:34.891 [3/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:34.891 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:34.891 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:34.891 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:34.891 [7/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:34.891 [8/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:34.891 [9/37] Compiling C object samples/null.p/null.c.o
00:01:34.891 [10/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:34.891 [11/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:34.891 [12/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:34.891 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:34.891 [14/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:34.891 [15/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:34.891 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:34.891 [17/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:34.891 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:34.891 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:34.891 [20/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:34.891 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:34.891 [22/37] Compiling C object samples/server.p/server.c.o
00:01:34.891 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:34.891 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:34.891 [25/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:35.150 [26/37] Compiling C object samples/client.p/client.c.o
00:01:35.150 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:35.150 [28/37] Linking target samples/client
00:01:35.150 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:35.150 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:01:35.150 [31/37] Linking target test/unit_tests
00:01:35.150 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:35.150 [33/37] Linking target samples/gpio-pci-idio-16
00:01:35.150 [34/37] Linking target samples/shadow_ioeventfd_server
00:01:35.150 [35/37] Linking target samples/lspci
00:01:35.150 [36/37] Linking target samples/server
00:01:35.150 [37/37] Linking target samples/null
00:01:35.409 INFO: autodetecting backend as ninja
00:01:35.409 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:35.409 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:35.667 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:35.667 ninja: no work to do.
00:01:40.939 The Meson build system
00:01:40.939 Version: 1.5.0
00:01:40.939 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:40.939 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:40.939 Build type: native build
00:01:40.939 Program cat found: YES (/usr/bin/cat)
00:01:40.939 Project name: DPDK
00:01:40.939 Project version: 24.03.0
00:01:40.939 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:40.939 C linker for the host machine: cc ld.bfd 2.40-14
00:01:40.939 Host machine cpu family: x86_64
00:01:40.939 Host machine cpu: x86_64
00:01:40.939 Message: ## Building in Developer Mode ##
00:01:40.939 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:40.939 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:40.939 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:40.939 Program python3 found: YES (/usr/bin/python3)
00:01:40.939 Program cat found: YES (/usr/bin/cat)
00:01:40.939 Compiler for C supports arguments -march=native: YES
00:01:40.939 Checking for size of "void *" : 8
00:01:40.939 Checking for size of "void *" : 8 (cached)
00:01:40.939 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:01:40.939 Library m found: YES
00:01:40.939 Library numa found: YES
00:01:40.939 Has header "numaif.h" : YES
00:01:40.939 Library fdt found: NO
00:01:40.939 Library execinfo found: NO
00:01:40.939 Has header "execinfo.h" : YES
00:01:40.939 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:40.939 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:40.939 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:40.939 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:40.939 Run-time dependency openssl found: YES 3.1.1
00:01:40.939 Run-time dependency libpcap found: YES 1.10.4
00:01:40.939 Has header "pcap.h" with dependency libpcap: YES
00:01:40.939 Compiler for C supports arguments -Wcast-qual: YES
00:01:40.939 Compiler for C supports arguments -Wdeprecated: YES
00:01:40.939 Compiler for C supports arguments -Wformat: YES
00:01:40.939 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:40.939 Compiler for C supports arguments -Wformat-security: NO
00:01:40.939 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:40.939 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:40.939 Compiler for C supports arguments -Wnested-externs: YES
00:01:40.939 Compiler for C supports arguments -Wold-style-definition: YES
00:01:40.939 Compiler for C supports arguments -Wpointer-arith: YES
00:01:40.939 Compiler for C supports arguments -Wsign-compare: YES
00:01:40.939 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:40.939 Compiler for C supports arguments -Wundef: YES
00:01:40.939 Compiler for C supports arguments -Wwrite-strings: YES
00:01:40.939 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:40.940 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:40.940 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:40.940 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:40.940 Program objdump found: YES (/usr/bin/objdump)
00:01:40.940 Compiler for C supports arguments -mavx512f: YES
00:01:40.940 Checking if "AVX512 checking" compiles: YES
00:01:40.940 Fetching value of define "__SSE4_2__" : 1
00:01:40.940 Fetching value of define "__AES__" : 1
00:01:40.940 Fetching value of define "__AVX__" : 1
00:01:40.940 Fetching value of define "__AVX2__" : 1
00:01:40.940 Fetching value of define "__AVX512BW__" : 1
00:01:40.940 Fetching value of define "__AVX512CD__" : 1
00:01:40.940 Fetching value of define "__AVX512DQ__" : 1
00:01:40.940 Fetching value of define "__AVX512F__" : 1
00:01:40.940 Fetching value of define "__AVX512VL__" : 1
00:01:40.940 Fetching value of define "__PCLMUL__" : 1
00:01:40.940 Fetching value of define "__RDRND__" : 1
00:01:40.940 Fetching value of define "__RDSEED__" : 1
00:01:40.940 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:40.940 Fetching value of define "__znver1__" : (undefined)
00:01:40.940 Fetching value of define "__znver2__" : (undefined)
00:01:40.940 Fetching value of define "__znver3__" : (undefined)
00:01:40.940 Fetching value of define "__znver4__" : (undefined)
00:01:40.940 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:40.940 Message: lib/log: Defining dependency "log"
00:01:40.940 Message: lib/kvargs: Defining dependency "kvargs"
00:01:40.940 Message: lib/telemetry: Defining dependency "telemetry"
00:01:40.940 Checking for function "getentropy" : NO
00:01:40.940 Message: lib/eal: Defining dependency "eal"
00:01:40.940 Message: lib/ring: Defining dependency "ring"
00:01:40.940 Message: lib/rcu: Defining dependency "rcu"
00:01:40.940 Message: lib/mempool: Defining dependency "mempool"
00:01:40.940 Message: lib/mbuf: Defining dependency "mbuf"
00:01:40.940 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:40.940 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:40.940 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:40.940 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:40.940 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:40.940 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:40.940 Compiler for C supports arguments -mpclmul: YES
00:01:40.940 Compiler for C supports arguments -maes: YES
00:01:40.940 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:40.940 Compiler for C supports arguments -mavx512bw: YES
00:01:40.940 Compiler for C supports arguments -mavx512dq: YES
00:01:40.940 Compiler for C supports arguments -mavx512vl: YES
00:01:40.940 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:40.940 Compiler for C supports arguments -mavx2: YES
00:01:40.940 Compiler for C supports arguments -mavx: YES
00:01:40.940 Message: lib/net: Defining dependency "net"
00:01:40.940 Message: lib/meter: Defining dependency "meter"
00:01:40.940 Message: lib/ethdev: Defining dependency "ethdev"
00:01:40.940 Message: lib/pci: Defining dependency "pci"
00:01:40.940 Message: lib/cmdline: Defining dependency "cmdline"
00:01:40.940 Message: lib/hash: Defining dependency "hash"
00:01:40.940 Message: lib/timer: Defining dependency "timer"
00:01:40.940 Message: lib/compressdev: Defining dependency "compressdev"
00:01:40.940 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:40.940 Message: lib/dmadev: Defining dependency "dmadev"
00:01:40.940 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:40.940 Message: lib/power: Defining dependency "power"
00:01:40.940 Message: lib/reorder: Defining dependency "reorder"
00:01:40.940 Message: lib/security: Defining dependency "security"
00:01:40.940 Has header "linux/userfaultfd.h" : YES
00:01:40.940 Has header "linux/vduse.h" : YES
00:01:40.940 Message: lib/vhost: Defining dependency "vhost"
00:01:40.940 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:40.940 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:40.940 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:40.940 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:40.940 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:40.940 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:40.940 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:40.940 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:40.940 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:40.940 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:40.940 Program doxygen found: YES (/usr/local/bin/doxygen)
00:01:40.940 Configuring doxy-api-html.conf using configuration
00:01:40.940 Configuring doxy-api-man.conf using configuration
00:01:40.940 Program mandb found: YES (/usr/bin/mandb)
00:01:40.940 Program sphinx-build found: NO
00:01:40.940 Configuring rte_build_config.h using configuration
00:01:40.940 Message:
00:01:40.940 =================
00:01:40.940 Applications Enabled
00:01:40.940 =================
00:01:40.940
00:01:40.940 apps:
00:01:40.940
00:01:40.940
00:01:40.940 Message:
00:01:40.940 =================
00:01:40.940 Libraries Enabled
00:01:40.940 =================
00:01:40.940
00:01:40.940 libs:
00:01:40.940 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:40.940 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:40.940 cryptodev, dmadev, power, reorder, security, vhost,
00:01:40.940
00:01:40.940 Message:
00:01:40.940 ===============
00:01:40.940 Drivers Enabled
00:01:40.940 ===============
00:01:40.940
00:01:40.940 common:
00:01:40.940
00:01:40.940 bus:
00:01:40.940 pci, vdev,
00:01:40.940 mempool:
00:01:40.940 ring,
00:01:40.940 dma:
00:01:40.940
00:01:40.940 net:
00:01:40.940
00:01:40.940 crypto:
00:01:40.940
00:01:40.940 compress:
00:01:40.940
00:01:40.940 vdpa:
00:01:40.940
00:01:40.940
00:01:40.940 Message:
00:01:40.940 =================
00:01:40.940 Content Skipped
00:01:40.940 =================
00:01:40.940
00:01:40.940 apps:
00:01:40.940 dumpcap: explicitly disabled via build config
00:01:40.940 graph: explicitly disabled via build config
00:01:40.940 pdump: explicitly disabled via build config
00:01:40.940 proc-info: explicitly disabled via build config
00:01:40.940 test-acl: explicitly disabled via build config
00:01:40.940 test-bbdev: explicitly disabled via build config
00:01:40.940 test-cmdline: explicitly disabled via build config
00:01:40.940 test-compress-perf: explicitly disabled via build config
00:01:40.940 test-crypto-perf: explicitly disabled via build config
00:01:40.940 test-dma-perf: explicitly disabled via build config
00:01:40.940 test-eventdev: explicitly disabled via build config
00:01:40.940 test-fib: explicitly disabled via build config
00:01:40.940 test-flow-perf: explicitly disabled via build config
00:01:40.940 test-gpudev: explicitly disabled via build config
00:01:40.940 test-mldev: explicitly disabled via build config
00:01:40.940 test-pipeline: explicitly disabled via build config
00:01:40.940 test-pmd: explicitly disabled via build config
00:01:40.940 test-regex: explicitly disabled via build config
00:01:40.940 test-sad: explicitly disabled via build config
00:01:40.940 test-security-perf: explicitly disabled via build config
00:01:40.940
00:01:40.940 libs:
00:01:40.940 argparse: explicitly disabled via build config
00:01:40.940 metrics: explicitly disabled via build config
00:01:40.940 acl: explicitly disabled via build config
00:01:40.940 bbdev: explicitly disabled via build config
00:01:40.940 bitratestats: explicitly disabled via build config
00:01:40.940 bpf: explicitly disabled via build config
00:01:40.940 cfgfile: explicitly disabled via build config
00:01:40.940 distributor: explicitly disabled via build config
00:01:40.940 efd: explicitly disabled via build config
00:01:40.940 eventdev: explicitly disabled via build config
00:01:40.940 dispatcher: explicitly disabled via build config
00:01:40.940 gpudev: explicitly disabled via build config
00:01:40.940 gro: explicitly disabled via build config
00:01:40.940 gso: explicitly disabled via build config
00:01:40.940 ip_frag: explicitly disabled via build config
00:01:40.940 jobstats: explicitly disabled via build config
00:01:40.940 latencystats: explicitly disabled via build config
00:01:40.940 lpm: explicitly disabled via build config
00:01:40.940 member: explicitly disabled via build config
00:01:40.940 pcapng: explicitly disabled via build config
00:01:40.940 rawdev: explicitly disabled via build config
00:01:40.940 regexdev: explicitly disabled via build config
00:01:40.940 mldev: explicitly disabled via build config
00:01:40.940 rib: explicitly disabled via build config
00:01:40.941 sched: explicitly disabled via build config
00:01:40.941 stack: explicitly disabled via build config
00:01:40.941 ipsec: explicitly disabled via build config
00:01:40.941 pdcp: explicitly disabled via build config
00:01:40.941 fib: explicitly disabled via build config
00:01:40.941 port: explicitly disabled via build config
00:01:40.941 pdump: explicitly disabled via build config
00:01:40.941 table: explicitly disabled via build config
00:01:40.941 pipeline: explicitly disabled via build config
00:01:40.941 graph: explicitly disabled via build config
00:01:40.941 node: explicitly disabled via build config
00:01:40.941
00:01:40.941 drivers:
00:01:40.941 common/cpt: not in enabled drivers build config
00:01:40.941 common/dpaax: not in enabled drivers build config
00:01:40.941 common/iavf: not in enabled drivers build config
00:01:40.941 common/idpf: not in enabled drivers build config
00:01:40.941 common/ionic: not in enabled drivers build config
00:01:40.941 common/mvep: not in enabled drivers build config
00:01:40.941 common/octeontx: not in enabled drivers build config
00:01:40.941 bus/auxiliary: not in enabled drivers build config
00:01:40.941 bus/cdx: not in enabled drivers build config
00:01:40.941 bus/dpaa: not in enabled drivers build config
00:01:40.941 bus/fslmc: not in enabled drivers build config
00:01:40.941 bus/ifpga: not in enabled drivers build config
00:01:40.941 bus/platform: not in enabled drivers build config
00:01:40.941 bus/uacce: not in enabled drivers build config
00:01:40.941 bus/vmbus: not in enabled drivers build config
00:01:40.941 common/cnxk: not in enabled drivers build config
00:01:40.941 common/mlx5: not in enabled drivers build config
00:01:40.941 common/nfp: not in enabled drivers build config
00:01:40.941 common/nitrox: not in enabled drivers build config
00:01:40.941 common/qat: not in enabled drivers build config
00:01:40.941 common/sfc_efx: not in enabled drivers build config
00:01:40.941 mempool/bucket: not in enabled drivers build config
00:01:40.941 mempool/cnxk: not in enabled drivers build config
00:01:40.941 mempool/dpaa: not in enabled drivers build config
00:01:40.941 mempool/dpaa2: not in enabled drivers build config
00:01:40.941 mempool/octeontx: not in enabled drivers build config
00:01:40.941 mempool/stack: not in enabled drivers build config
00:01:40.941 dma/cnxk: not in enabled drivers build config
00:01:40.941 dma/dpaa: not in enabled drivers build config
00:01:40.941 dma/dpaa2: not in enabled drivers build config
00:01:40.941 dma/hisilicon: not in enabled drivers build config
00:01:40.941 dma/idxd: not in enabled drivers build config
00:01:40.941 dma/ioat: not in enabled drivers build config
00:01:40.941 dma/skeleton: not in enabled drivers build config
00:01:40.941 net/af_packet: not in enabled drivers build config
00:01:40.941 net/af_xdp: not in enabled drivers build config
00:01:40.941 net/ark: not in enabled drivers build config
00:01:40.941 net/atlantic: not in enabled drivers build config
00:01:40.941 net/avp: not in enabled drivers build config
00:01:40.941 net/axgbe: not in enabled drivers build config
00:01:40.941 net/bnx2x: not in enabled drivers build config
00:01:40.941 net/bnxt: not in enabled drivers build config
00:01:40.941 net/bonding: not in enabled drivers build config
00:01:40.941 net/cnxk: not in enabled drivers build config
00:01:40.941 net/cpfl: not in enabled drivers build config
00:01:40.941 net/cxgbe: not in enabled drivers build config
00:01:40.941 net/dpaa: not in enabled drivers build config
00:01:40.941 net/dpaa2: not in enabled drivers build config
00:01:40.941 net/e1000: not in enabled drivers build config
00:01:40.941 net/ena: not in enabled drivers build config
00:01:40.941 net/enetc: not in enabled drivers build config
00:01:40.941 net/enetfec: not in enabled drivers build config
00:01:40.941 net/enic: not in enabled drivers build config
00:01:40.941 net/failsafe: not in enabled drivers build config
00:01:40.941 net/fm10k: not in enabled drivers build config
00:01:40.941 net/gve: not in enabled drivers build config
00:01:40.941 net/hinic: not in enabled drivers build config
00:01:40.941 net/hns3: not in enabled drivers build config
00:01:40.941 net/i40e: not in enabled drivers build config
00:01:40.941 net/iavf: not in enabled drivers build config
00:01:40.941 net/ice: not in enabled drivers build config
00:01:40.941 net/idpf: not in enabled drivers build config
00:01:40.941 net/igc: not in enabled drivers build config
00:01:40.941 net/ionic: not in enabled drivers build config
00:01:40.941 net/ipn3ke: not in enabled drivers build config
00:01:40.941 net/ixgbe: not in enabled drivers build config
00:01:40.941 net/mana: not in enabled drivers build config
00:01:40.941 net/memif: not in enabled drivers build config
00:01:40.941 net/mlx4: not in enabled drivers build config
00:01:40.941 net/mlx5: not in enabled drivers build config
00:01:40.941 net/mvneta: not in enabled drivers build config
00:01:40.941 net/mvpp2: not in enabled drivers build config
00:01:40.941 net/netvsc: not in enabled drivers build config
00:01:40.941 net/nfb: not in enabled drivers build config
00:01:40.941 net/nfp: not in enabled drivers build config
00:01:40.941 net/ngbe: not in enabled drivers build config
00:01:40.941 net/null: not in enabled drivers build config
00:01:40.941 net/octeontx: not in enabled drivers build config
00:01:40.941 net/octeon_ep: not in enabled drivers build config
00:01:40.941 net/pcap: not in enabled drivers build config
00:01:40.941 net/pfe: not in enabled drivers build config
00:01:40.941 net/qede: not in enabled drivers build config
00:01:40.941 net/ring: not in enabled drivers build config
00:01:40.941 net/sfc: not in enabled drivers build config
00:01:40.941 net/softnic: not in enabled drivers build config
00:01:40.941 net/tap: not in enabled drivers build config
00:01:40.941 net/thunderx: not in enabled drivers build config
00:01:40.941 net/txgbe: not in enabled drivers build config
00:01:40.941 net/vdev_netvsc: not in enabled drivers build config
00:01:40.941 net/vhost: not in enabled drivers build config
00:01:40.941 net/virtio: not in enabled drivers build config
00:01:40.941 net/vmxnet3: not in enabled drivers build config
00:01:40.941 raw/*: missing internal dependency, "rawdev"
00:01:40.941 crypto/armv8: not in enabled drivers build config
00:01:40.941 crypto/bcmfs: not in enabled drivers build config
00:01:40.941 crypto/caam_jr: not in enabled drivers build config
00:01:40.941 crypto/ccp: not in enabled drivers build config
00:01:40.941 crypto/cnxk: not in enabled drivers build config
00:01:40.941 crypto/dpaa_sec: not in enabled drivers build config
00:01:40.941 crypto/dpaa2_sec: not in enabled drivers build config
00:01:40.941 crypto/ipsec_mb: not in enabled drivers build config
00:01:40.941 crypto/mlx5: not in enabled drivers build config
00:01:40.941 crypto/mvsam: not in enabled drivers build config
00:01:40.941 crypto/nitrox: not in enabled drivers build config
00:01:40.941 crypto/null: not in enabled drivers build config
00:01:40.941 crypto/octeontx: not in enabled drivers build config
00:01:40.941 crypto/openssl: not in enabled drivers build config
00:01:40.941 crypto/scheduler: not in enabled drivers build config
00:01:40.941 crypto/uadk: not in enabled drivers build config
00:01:40.941 crypto/virtio: not in enabled drivers build config
00:01:40.941 compress/isal: not in enabled drivers build config
00:01:40.941 compress/mlx5: not in enabled drivers build config
00:01:40.941 compress/nitrox: not in enabled drivers build config
00:01:40.941 compress/octeontx: not in enabled drivers build config
00:01:40.941 compress/zlib: not in enabled drivers build config
00:01:40.941 regex/*: missing internal dependency, "regexdev"
00:01:40.941 ml/*: missing internal dependency, "mldev"
00:01:40.941 vdpa/ifc: not in enabled drivers build config
00:01:40.941 vdpa/mlx5: not in enabled drivers build config
00:01:40.941 vdpa/nfp: not in enabled drivers build config
00:01:40.941 vdpa/sfc: not in enabled drivers build config
00:01:40.941 event/*: missing internal dependency, "eventdev"
00:01:40.941 baseband/*: missing internal dependency, "bbdev"
00:01:40.941 gpu/*: missing internal dependency, "gpudev"
00:01:40.941
00:01:40.941
00:01:40.941 Build targets in project: 85
00:01:40.941
00:01:40.941 DPDK 24.03.0
00:01:40.941
00:01:40.941 User defined options
00:01:40.941 buildtype : debug
00:01:40.941 default_library : shared
00:01:40.941 libdir : lib
00:01:40.941 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:40.941 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:40.941 c_link_args :
00:01:40.941 cpu_instruction_set: native
00:01:40.941 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump
00:01:40.941 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump
00:01:40.941 enable_docs : false
00:01:40.941 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:01:40.941 enable_kmods : false
00:01:40.941 max_lcores : 128
00:01:40.941 tests : false
00:01:40.941
00:01:40.941 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:41.540 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:41.540 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:41.540 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:41.540 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:41.540 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:41.540 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:41.540 [6/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:41.540 [7/268] Linking static target lib/librte_kvargs.a
00:01:41.540 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:41.540 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:41.540 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:41.540 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:41.540 [12/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:41.820 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:41.820 [14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:41.820 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:41.820 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:41.820 [17/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:41.820 [18/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:41.820 [19/268] Linking static target lib/librte_log.a
00:01:41.820 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:41.820 [21/268] Linking static target lib/librte_pci.a
00:01:41.820 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:41.820 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:42.086 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:42.086 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:42.086 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:42.086 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:42.086 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:42.086 [29/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:42.086 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:42.086 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:42.086 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:42.086 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:42.086 [34/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:42.086 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:42.086 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:42.086 [37/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:42.086 [38/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:42.086 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:42.086 [40/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:42.086 [41/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:42.086 [42/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:42.086 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:42.086 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:42.086 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:42.086 [46/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:42.086 [47/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:42.086 [48/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:42.086 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:42.086 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:42.086 [51/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:42.086 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:42.086 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:42.086 [54/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:42.086 [55/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:42.086 [56/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:42.086 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:42.086 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:42.086 [59/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:42.086 [60/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:42.086 [61/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:42.387 [62/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:42.387 [63/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:01:42.387 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:42.387 [65/268] Linking static target lib/librte_ring.a
00:01:42.387 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:42.387 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:42.387 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:42.387 [69/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:42.387 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:42.387 [71/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:42.387 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:42.387 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:42.387 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:42.387 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:42.387 [76/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:42.387 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:42.387 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:42.387 [79/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:42.387 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:42.387 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:42.387 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:42.387 [83/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:42.387 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:42.387 [85/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:42.387 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:42.387 [87/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.387 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:42.387 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:42.387 [90/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:42.387 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:42.387 [92/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:42.387 [93/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:42.387 [94/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:42.387 [95/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:42.387 [96/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:42.387 [97/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:42.387 [98/268] Linking static target lib/librte_meter.a 00:01:42.387 [99/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:42.387 [100/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:42.387 [101/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:42.387 [102/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.387 [103/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:42.387 [104/268] Linking static target lib/librte_mempool.a 00:01:42.387 [105/268] Linking static target lib/librte_telemetry.a 00:01:42.387 [106/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:42.387 [107/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:42.387 [108/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:42.387 [109/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:42.387 [110/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:42.387 [111/268] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:42.387 [112/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:42.387 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:42.387 [114/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:42.387 [115/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:42.387 [116/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:42.387 [117/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:42.387 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:42.387 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:42.387 [120/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:42.387 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:42.387 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:42.387 [123/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:42.387 [124/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:42.387 [125/268] Linking static target lib/librte_rcu.a 00:01:42.387 [126/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:42.387 [127/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:42.387 [128/268] Linking static target lib/librte_cmdline.a 00:01:42.387 [129/268] Linking static target lib/librte_net.a 00:01:42.646 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:42.646 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:42.646 [132/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:42.646 [133/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:42.646 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:42.646 [135/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:42.646 [136/268] Linking static target lib/librte_eal.a 00:01:42.646 [137/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.646 [138/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.646 [139/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:42.646 [140/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:42.646 [141/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:42.646 [142/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:42.646 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:42.646 [144/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.646 [145/268] Linking target lib/librte_log.so.24.1 00:01:42.646 [146/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:42.646 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:42.646 [148/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:42.646 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:42.646 [150/268] Linking static target lib/librte_mbuf.a 00:01:42.646 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:42.646 [152/268] Compiling 
C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:42.646 [153/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:42.646 [154/268] Linking static target lib/librte_timer.a 00:01:42.646 [155/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:42.646 [156/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:42.646 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:42.646 [158/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:42.646 [159/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:42.646 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:42.646 [161/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:42.646 [162/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:42.646 [163/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:42.646 [164/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:42.646 [165/268] Linking static target lib/librte_compressdev.a 00:01:42.646 [166/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.646 [167/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:42.646 [168/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:42.646 [169/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.646 [170/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.646 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:42.906 [172/268] Linking target lib/librte_telemetry.so.24.1 00:01:42.906 [173/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:42.906 [174/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:42.906 [175/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:42.906 [176/268] Linking target lib/librte_kvargs.so.24.1 00:01:42.906 [177/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:42.906 [178/268] Linking static target lib/librte_power.a 00:01:42.906 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:42.906 [180/268] Linking static target lib/librte_dmadev.a 00:01:42.906 [181/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:42.906 [182/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:42.906 [183/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:42.906 [184/268] Linking static target lib/librte_reorder.a 00:01:42.906 [185/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:42.906 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:42.906 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:42.906 [188/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:42.906 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:42.906 [190/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:42.906 [191/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:42.906 [192/268] Linking static 
target drivers/librte_bus_vdev.a 00:01:42.906 [193/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:42.906 [194/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:42.906 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:42.906 [196/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:42.906 [197/268] Linking static target lib/librte_security.a 00:01:42.906 [198/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:42.906 [199/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:42.906 [200/268] Linking static target lib/librte_hash.a 00:01:42.906 [201/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.906 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:43.165 [203/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:43.165 [204/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:43.165 [205/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:43.165 [206/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:43.165 [207/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:43.165 [208/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:43.165 [209/268] Linking static target lib/librte_cryptodev.a 00:01:43.165 [210/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:43.165 [211/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:43.165 [212/268] Linking static target drivers/librte_mempool_ring.a 00:01:43.165 [213/268] Linking static target drivers/librte_bus_pci.a 00:01:43.165 [214/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.165 [215/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.165 [216/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.424 [217/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:43.424 [218/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.424 [219/268] Linking static target lib/librte_ethdev.a 00:01:43.424 [220/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.424 [221/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.424 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.683 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.683 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.683 [225/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:43.941 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.941 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.874 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:44.874 [229/268] Linking static 
target lib/librte_vhost.a 00:01:44.874 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.775 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.044 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.612 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.612 [234/268] Linking target lib/librte_eal.so.24.1 00:01:52.871 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:52.871 [236/268] Linking target lib/librte_ring.so.24.1 00:01:52.871 [237/268] Linking target lib/librte_timer.so.24.1 00:01:52.871 [238/268] Linking target lib/librte_meter.so.24.1 00:01:52.871 [239/268] Linking target lib/librte_pci.so.24.1 00:01:52.871 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:52.871 [241/268] Linking target lib/librte_dmadev.so.24.1 00:01:52.871 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:52.871 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:52.871 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:52.871 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:52.871 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:53.130 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:53.130 [248/268] Linking target lib/librte_mempool.so.24.1 00:01:53.130 [249/268] Linking target lib/librte_rcu.so.24.1 00:01:53.130 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:53.130 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:53.130 [252/268] Linking target lib/librte_mbuf.so.24.1 00:01:53.130 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:53.389 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:53.389 [255/268] Linking target lib/librte_reorder.so.24.1 00:01:53.389 [256/268] Linking target lib/librte_compressdev.so.24.1 00:01:53.389 [257/268] Linking target lib/librte_net.so.24.1 00:01:53.389 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:01:53.389 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:53.389 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:53.648 [261/268] Linking target lib/librte_security.so.24.1 00:01:53.648 [262/268] Linking target lib/librte_hash.so.24.1 00:01:53.648 [263/268] Linking target lib/librte_cmdline.so.24.1 00:01:53.648 [264/268] Linking target lib/librte_ethdev.so.24.1 00:01:53.648 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:53.648 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:53.648 [267/268] Linking target lib/librte_power.so.24.1 00:01:53.648 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:53.648 INFO: autodetecting backend as ninja 00:01:53.648 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:05.857 CC lib/log/log.o 00:02:05.857 CC lib/log/log_flags.o 00:02:05.857 CC lib/ut_mock/mock.o 00:02:05.857 CC lib/log/log_deprecated.o 
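Once the [268/268] DPDK link step completes, the two INFO lines show meson handing off to ninja with -j 96, and the short "CC lib/..." lines that follow are SPDK's own make-based build. The lib/vfu_tgt objects here (and the module/vfu_device objects further down) indicate that vfio-user support was switched on at configure time. A minimal sketch of a matching configure step, assuming SPDK's standard ./configure flag for that feature and reusing the parallelism shown in the ninja command above:

# Sketch only: flag name assumed from SPDK's ./configure options.
./configure --with-vfio-user
make -j 96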
00:02:05.857 CC lib/ut/ut.o 00:02:05.857 LIB libspdk_log.a 00:02:05.857 LIB libspdk_ut_mock.a 00:02:05.857 LIB libspdk_ut.a 00:02:05.857 SO libspdk_log.so.7.1 00:02:05.857 SO libspdk_ut.so.2.0 00:02:05.857 SO libspdk_ut_mock.so.6.0 00:02:05.857 SYMLINK libspdk_log.so 00:02:05.857 SYMLINK libspdk_ut.so 00:02:05.857 SYMLINK libspdk_ut_mock.so 00:02:05.857 CC lib/ioat/ioat.o 00:02:05.857 CC lib/util/base64.o 00:02:05.857 CC lib/util/bit_array.o 00:02:05.857 CC lib/util/cpuset.o 00:02:05.857 CXX lib/trace_parser/trace.o 00:02:05.857 CC lib/dma/dma.o 00:02:05.857 CC lib/util/crc16.o 00:02:05.857 CC lib/util/crc32.o 00:02:05.857 CC lib/util/crc32c.o 00:02:05.857 CC lib/util/crc32_ieee.o 00:02:05.857 CC lib/util/crc64.o 00:02:05.857 CC lib/util/dif.o 00:02:05.857 CC lib/util/fd.o 00:02:05.857 CC lib/util/fd_group.o 00:02:05.857 CC lib/util/file.o 00:02:05.857 CC lib/util/hexlify.o 00:02:05.857 CC lib/util/iov.o 00:02:05.857 CC lib/util/math.o 00:02:05.857 CC lib/util/net.o 00:02:05.857 CC lib/util/pipe.o 00:02:05.857 CC lib/util/strerror_tls.o 00:02:05.857 CC lib/util/string.o 00:02:05.857 CC lib/util/uuid.o 00:02:05.857 CC lib/util/xor.o 00:02:05.857 CC lib/util/zipf.o 00:02:05.857 CC lib/util/md5.o 00:02:05.857 CC lib/vfio_user/host/vfio_user_pci.o 00:02:05.857 CC lib/vfio_user/host/vfio_user.o 00:02:05.857 LIB libspdk_dma.a 00:02:05.857 SO libspdk_dma.so.5.0 00:02:05.857 LIB libspdk_ioat.a 00:02:05.857 SYMLINK libspdk_dma.so 00:02:05.857 SO libspdk_ioat.so.7.0 00:02:05.857 SYMLINK libspdk_ioat.so 00:02:05.857 LIB libspdk_vfio_user.a 00:02:05.857 SO libspdk_vfio_user.so.5.0 00:02:05.857 LIB libspdk_util.a 00:02:05.857 SYMLINK libspdk_vfio_user.so 00:02:05.857 SO libspdk_util.so.10.1 00:02:05.857 SYMLINK libspdk_util.so 00:02:05.857 LIB libspdk_trace_parser.a 00:02:05.857 SO libspdk_trace_parser.so.6.0 00:02:05.857 SYMLINK libspdk_trace_parser.so 00:02:05.857 CC lib/rdma_utils/rdma_utils.o 00:02:05.857 CC lib/env_dpdk/env.o 00:02:05.857 CC lib/json/json_parse.o 00:02:05.857 CC lib/idxd/idxd.o 00:02:05.857 CC lib/env_dpdk/memory.o 00:02:05.857 CC lib/json/json_util.o 00:02:05.857 CC lib/idxd/idxd_user.o 00:02:05.857 CC lib/env_dpdk/pci.o 00:02:05.857 CC lib/json/json_write.o 00:02:05.857 CC lib/env_dpdk/init.o 00:02:05.857 CC lib/idxd/idxd_kernel.o 00:02:05.857 CC lib/env_dpdk/threads.o 00:02:05.857 CC lib/env_dpdk/pci_ioat.o 00:02:05.857 CC lib/env_dpdk/pci_virtio.o 00:02:05.857 CC lib/vmd/vmd.o 00:02:05.857 CC lib/env_dpdk/pci_vmd.o 00:02:05.857 CC lib/conf/conf.o 00:02:05.857 CC lib/env_dpdk/pci_idxd.o 00:02:05.857 CC lib/vmd/led.o 00:02:05.857 CC lib/env_dpdk/pci_event.o 00:02:05.857 CC lib/env_dpdk/sigbus_handler.o 00:02:05.857 CC lib/env_dpdk/pci_dpdk.o 00:02:05.857 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:05.857 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:05.857 LIB libspdk_conf.a 00:02:05.857 LIB libspdk_rdma_utils.a 00:02:05.857 LIB libspdk_json.a 00:02:05.857 SO libspdk_conf.so.6.0 00:02:05.857 SO libspdk_rdma_utils.so.1.0 00:02:05.857 SO libspdk_json.so.6.0 00:02:05.857 SYMLINK libspdk_conf.so 00:02:05.857 SYMLINK libspdk_rdma_utils.so 00:02:05.857 SYMLINK libspdk_json.so 00:02:05.857 LIB libspdk_idxd.a 00:02:05.857 LIB libspdk_vmd.a 00:02:05.857 SO libspdk_idxd.so.12.1 00:02:05.857 SO libspdk_vmd.so.6.0 00:02:06.116 SYMLINK libspdk_idxd.so 00:02:06.116 SYMLINK libspdk_vmd.so 00:02:06.116 CC lib/rdma_provider/common.o 00:02:06.116 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:06.116 CC lib/jsonrpc/jsonrpc_server.o 00:02:06.116 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:06.116 CC 
lib/jsonrpc/jsonrpc_client.o 00:02:06.116 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:06.376 LIB libspdk_rdma_provider.a 00:02:06.376 SO libspdk_rdma_provider.so.7.0 00:02:06.376 LIB libspdk_jsonrpc.a 00:02:06.376 SO libspdk_jsonrpc.so.6.0 00:02:06.376 SYMLINK libspdk_rdma_provider.so 00:02:06.376 SYMLINK libspdk_jsonrpc.so 00:02:06.376 LIB libspdk_env_dpdk.a 00:02:06.635 SO libspdk_env_dpdk.so.15.1 00:02:06.635 SYMLINK libspdk_env_dpdk.so 00:02:06.894 CC lib/rpc/rpc.o 00:02:06.894 LIB libspdk_rpc.a 00:02:06.894 SO libspdk_rpc.so.6.0 00:02:07.153 SYMLINK libspdk_rpc.so 00:02:07.410 CC lib/keyring/keyring.o 00:02:07.411 CC lib/keyring/keyring_rpc.o 00:02:07.411 CC lib/notify/notify.o 00:02:07.411 CC lib/notify/notify_rpc.o 00:02:07.411 CC lib/trace/trace.o 00:02:07.411 CC lib/trace/trace_flags.o 00:02:07.411 CC lib/trace/trace_rpc.o 00:02:07.669 LIB libspdk_notify.a 00:02:07.669 SO libspdk_notify.so.6.0 00:02:07.669 LIB libspdk_keyring.a 00:02:07.669 LIB libspdk_trace.a 00:02:07.669 SO libspdk_keyring.so.2.0 00:02:07.669 SYMLINK libspdk_notify.so 00:02:07.669 SO libspdk_trace.so.11.0 00:02:07.669 SYMLINK libspdk_keyring.so 00:02:07.669 SYMLINK libspdk_trace.so 00:02:07.928 CC lib/sock/sock.o 00:02:07.928 CC lib/sock/sock_rpc.o 00:02:07.928 CC lib/thread/thread.o 00:02:08.186 CC lib/thread/iobuf.o 00:02:08.445 LIB libspdk_sock.a 00:02:08.445 SO libspdk_sock.so.10.0 00:02:08.445 SYMLINK libspdk_sock.so 00:02:08.704 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:08.704 CC lib/nvme/nvme_ctrlr.o 00:02:08.704 CC lib/nvme/nvme_fabric.o 00:02:08.704 CC lib/nvme/nvme_ns_cmd.o 00:02:08.704 CC lib/nvme/nvme_ns.o 00:02:08.704 CC lib/nvme/nvme_pcie_common.o 00:02:08.704 CC lib/nvme/nvme_pcie.o 00:02:08.704 CC lib/nvme/nvme_qpair.o 00:02:08.704 CC lib/nvme/nvme.o 00:02:08.704 CC lib/nvme/nvme_quirks.o 00:02:08.704 CC lib/nvme/nvme_transport.o 00:02:08.704 CC lib/nvme/nvme_discovery.o 00:02:08.704 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:08.704 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:08.704 CC lib/nvme/nvme_tcp.o 00:02:08.704 CC lib/nvme/nvme_opal.o 00:02:08.704 CC lib/nvme/nvme_io_msg.o 00:02:08.704 CC lib/nvme/nvme_poll_group.o 00:02:08.704 CC lib/nvme/nvme_zns.o 00:02:08.704 CC lib/nvme/nvme_stubs.o 00:02:08.704 CC lib/nvme/nvme_auth.o 00:02:08.704 CC lib/nvme/nvme_cuse.o 00:02:08.704 CC lib/nvme/nvme_vfio_user.o 00:02:08.704 CC lib/nvme/nvme_rdma.o 00:02:09.271 LIB libspdk_thread.a 00:02:09.271 SO libspdk_thread.so.11.0 00:02:09.271 SYMLINK libspdk_thread.so 00:02:09.529 CC lib/init/subsystem.o 00:02:09.529 CC lib/init/json_config.o 00:02:09.529 CC lib/virtio/virtio.o 00:02:09.529 CC lib/init/subsystem_rpc.o 00:02:09.529 CC lib/init/rpc.o 00:02:09.529 CC lib/virtio/virtio_vhost_user.o 00:02:09.529 CC lib/virtio/virtio_pci.o 00:02:09.529 CC lib/virtio/virtio_vfio_user.o 00:02:09.529 CC lib/vfu_tgt/tgt_rpc.o 00:02:09.529 CC lib/vfu_tgt/tgt_endpoint.o 00:02:09.529 CC lib/accel/accel.o 00:02:09.529 CC lib/accel/accel_rpc.o 00:02:09.529 CC lib/accel/accel_sw.o 00:02:09.529 CC lib/fsdev/fsdev.o 00:02:09.529 CC lib/fsdev/fsdev_io.o 00:02:09.529 CC lib/fsdev/fsdev_rpc.o 00:02:09.529 CC lib/blob/blobstore.o 00:02:09.529 CC lib/blob/request.o 00:02:09.529 CC lib/blob/zeroes.o 00:02:09.529 CC lib/blob/blob_bs_dev.o 00:02:09.788 LIB libspdk_init.a 00:02:09.788 SO libspdk_init.so.6.0 00:02:09.788 LIB libspdk_virtio.a 00:02:09.788 LIB libspdk_vfu_tgt.a 00:02:09.788 SYMLINK libspdk_init.so 00:02:09.788 SO libspdk_virtio.so.7.0 00:02:09.788 SO libspdk_vfu_tgt.so.3.0 00:02:10.047 SYMLINK libspdk_vfu_tgt.so 00:02:10.047 SYMLINK 
libspdk_virtio.so 00:02:10.047 LIB libspdk_fsdev.a 00:02:10.047 SO libspdk_fsdev.so.2.0 00:02:10.305 CC lib/event/app.o 00:02:10.305 CC lib/event/reactor.o 00:02:10.305 CC lib/event/log_rpc.o 00:02:10.305 CC lib/event/app_rpc.o 00:02:10.305 CC lib/event/scheduler_static.o 00:02:10.305 SYMLINK libspdk_fsdev.so 00:02:10.305 LIB libspdk_accel.a 00:02:10.305 SO libspdk_accel.so.16.0 00:02:10.564 SYMLINK libspdk_accel.so 00:02:10.564 LIB libspdk_nvme.a 00:02:10.564 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:10.564 LIB libspdk_event.a 00:02:10.564 SO libspdk_event.so.14.0 00:02:10.564 SO libspdk_nvme.so.15.0 00:02:10.564 SYMLINK libspdk_event.so 00:02:10.823 SYMLINK libspdk_nvme.so 00:02:10.823 CC lib/bdev/bdev.o 00:02:10.823 CC lib/bdev/bdev_rpc.o 00:02:10.823 CC lib/bdev/bdev_zone.o 00:02:10.823 CC lib/bdev/part.o 00:02:10.823 CC lib/bdev/scsi_nvme.o 00:02:11.082 LIB libspdk_fuse_dispatcher.a 00:02:11.082 SO libspdk_fuse_dispatcher.so.1.0 00:02:11.082 SYMLINK libspdk_fuse_dispatcher.so 00:02:11.648 LIB libspdk_blob.a 00:02:11.648 SO libspdk_blob.so.12.0 00:02:11.907 SYMLINK libspdk_blob.so 00:02:12.165 CC lib/blobfs/blobfs.o 00:02:12.165 CC lib/blobfs/tree.o 00:02:12.165 CC lib/lvol/lvol.o 00:02:12.733 LIB libspdk_bdev.a 00:02:12.733 LIB libspdk_blobfs.a 00:02:12.733 SO libspdk_bdev.so.17.0 00:02:12.733 SO libspdk_blobfs.so.11.0 00:02:12.733 LIB libspdk_lvol.a 00:02:12.733 SYMLINK libspdk_blobfs.so 00:02:12.733 SYMLINK libspdk_bdev.so 00:02:12.733 SO libspdk_lvol.so.11.0 00:02:12.992 SYMLINK libspdk_lvol.so 00:02:13.254 CC lib/ublk/ublk.o 00:02:13.254 CC lib/nbd/nbd.o 00:02:13.254 CC lib/ublk/ublk_rpc.o 00:02:13.254 CC lib/nbd/nbd_rpc.o 00:02:13.254 CC lib/ftl/ftl_core.o 00:02:13.254 CC lib/ftl/ftl_init.o 00:02:13.254 CC lib/nvmf/ctrlr.o 00:02:13.254 CC lib/ftl/ftl_layout.o 00:02:13.254 CC lib/nvmf/ctrlr_discovery.o 00:02:13.254 CC lib/ftl/ftl_debug.o 00:02:13.254 CC lib/nvmf/ctrlr_bdev.o 00:02:13.254 CC lib/ftl/ftl_io.o 00:02:13.254 CC lib/nvmf/subsystem.o 00:02:13.254 CC lib/ftl/ftl_sb.o 00:02:13.254 CC lib/scsi/dev.o 00:02:13.254 CC lib/nvmf/nvmf.o 00:02:13.254 CC lib/ftl/ftl_l2p.o 00:02:13.254 CC lib/scsi/lun.o 00:02:13.254 CC lib/nvmf/nvmf_rpc.o 00:02:13.254 CC lib/ftl/ftl_l2p_flat.o 00:02:13.254 CC lib/scsi/port.o 00:02:13.254 CC lib/nvmf/transport.o 00:02:13.254 CC lib/ftl/ftl_nv_cache.o 00:02:13.254 CC lib/scsi/scsi.o 00:02:13.254 CC lib/nvmf/tcp.o 00:02:13.254 CC lib/ftl/ftl_band.o 00:02:13.254 CC lib/nvmf/stubs.o 00:02:13.254 CC lib/nvmf/mdns_server.o 00:02:13.254 CC lib/scsi/scsi_bdev.o 00:02:13.254 CC lib/ftl/ftl_band_ops.o 00:02:13.254 CC lib/nvmf/rdma.o 00:02:13.254 CC lib/ftl/ftl_writer.o 00:02:13.254 CC lib/scsi/scsi_pr.o 00:02:13.254 CC lib/nvmf/vfio_user.o 00:02:13.254 CC lib/scsi/scsi_rpc.o 00:02:13.254 CC lib/scsi/task.o 00:02:13.254 CC lib/nvmf/auth.o 00:02:13.254 CC lib/ftl/ftl_rq.o 00:02:13.254 CC lib/ftl/ftl_reloc.o 00:02:13.254 CC lib/ftl/ftl_l2p_cache.o 00:02:13.254 CC lib/ftl/ftl_p2l.o 00:02:13.254 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:13.255 CC lib/ftl/ftl_p2l_log.o 00:02:13.255 CC lib/ftl/mngt/ftl_mngt.o 00:02:13.255 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:13.255 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:13.255 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:13.255 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:13.255 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:13.255 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:13.255 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:13.255 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:13.255 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:13.255 CC lib/ftl/mngt/ftl_mngt_recovery.o 
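Throughout this section each "LIB libspdk_X.a" is paired with an "SO libspdk_X.so.M.N" check and a "SYMLINK libspdk_X.so" step: the build produces the static archive, then a versioned shared object, then the unversioned development symlink. A hypothetical sketch of that trio for one library, using the libspdk_log.so.7.1 version visible above (the actual SPDK make rules may differ in detail, and $OBJS is a placeholder for the library's object files):

# Sketch of the LIB / SO / SYMLINK pattern; flags and object list are
# placeholders, the soname/symlink layout mirrors the log output.
ar rcs build/lib/libspdk_log.a $OBJS
cc -shared -o build/lib/libspdk_log.so.7.1 -Wl,-soname,libspdk_log.so.7.1 $OBJS
ln -sf libspdk_log.so.7.1 build/lib/libspdk_log.so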
00:02:13.255 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:13.255 CC lib/ftl/utils/ftl_conf.o 00:02:13.255 CC lib/ftl/utils/ftl_md.o 00:02:13.255 CC lib/ftl/utils/ftl_bitmap.o 00:02:13.255 CC lib/ftl/utils/ftl_mempool.o 00:02:13.255 CC lib/ftl/utils/ftl_property.o 00:02:13.255 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:13.255 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:13.255 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:13.255 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:13.255 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:13.255 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:13.255 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:13.255 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:13.255 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:13.255 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:13.255 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:13.255 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:13.255 CC lib/ftl/base/ftl_base_bdev.o 00:02:13.255 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:13.255 CC lib/ftl/base/ftl_base_dev.o 00:02:13.255 CC lib/ftl/ftl_trace.o 00:02:13.822 LIB libspdk_nbd.a 00:02:13.822 SO libspdk_nbd.so.7.0 00:02:13.822 LIB libspdk_scsi.a 00:02:13.822 SO libspdk_scsi.so.9.0 00:02:13.822 SYMLINK libspdk_nbd.so 00:02:13.822 LIB libspdk_ublk.a 00:02:14.081 SO libspdk_ublk.so.3.0 00:02:14.081 SYMLINK libspdk_scsi.so 00:02:14.081 SYMLINK libspdk_ublk.so 00:02:14.340 LIB libspdk_ftl.a 00:02:14.340 CC lib/iscsi/conn.o 00:02:14.340 CC lib/iscsi/init_grp.o 00:02:14.340 CC lib/iscsi/iscsi.o 00:02:14.340 CC lib/iscsi/param.o 00:02:14.340 CC lib/iscsi/portal_grp.o 00:02:14.340 CC lib/vhost/vhost.o 00:02:14.340 CC lib/iscsi/tgt_node.o 00:02:14.340 CC lib/vhost/vhost_rpc.o 00:02:14.340 CC lib/iscsi/iscsi_subsystem.o 00:02:14.340 CC lib/iscsi/iscsi_rpc.o 00:02:14.340 CC lib/vhost/vhost_scsi.o 00:02:14.340 CC lib/vhost/vhost_blk.o 00:02:14.340 CC lib/iscsi/task.o 00:02:14.340 CC lib/vhost/rte_vhost_user.o 00:02:14.340 SO libspdk_ftl.so.9.0 00:02:14.599 SYMLINK libspdk_ftl.so 00:02:15.167 LIB libspdk_nvmf.a 00:02:15.167 SO libspdk_nvmf.so.20.0 00:02:15.167 LIB libspdk_vhost.a 00:02:15.167 SO libspdk_vhost.so.8.0 00:02:15.167 SYMLINK libspdk_nvmf.so 00:02:15.167 SYMLINK libspdk_vhost.so 00:02:15.426 LIB libspdk_iscsi.a 00:02:15.426 SO libspdk_iscsi.so.8.0 00:02:15.426 SYMLINK libspdk_iscsi.so 00:02:15.994 CC module/env_dpdk/env_dpdk_rpc.o 00:02:15.994 CC module/vfu_device/vfu_virtio.o 00:02:15.994 CC module/vfu_device/vfu_virtio_blk.o 00:02:15.994 CC module/vfu_device/vfu_virtio_scsi.o 00:02:15.994 CC module/vfu_device/vfu_virtio_rpc.o 00:02:15.994 CC module/vfu_device/vfu_virtio_fs.o 00:02:16.253 LIB libspdk_env_dpdk_rpc.a 00:02:16.253 CC module/accel/dsa/accel_dsa_rpc.o 00:02:16.253 CC module/accel/dsa/accel_dsa.o 00:02:16.253 CC module/accel/ioat/accel_ioat.o 00:02:16.253 CC module/accel/ioat/accel_ioat_rpc.o 00:02:16.253 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:16.253 CC module/keyring/file/keyring.o 00:02:16.253 CC module/keyring/file/keyring_rpc.o 00:02:16.253 SO libspdk_env_dpdk_rpc.so.6.0 00:02:16.253 CC module/fsdev/aio/fsdev_aio.o 00:02:16.253 CC module/accel/error/accel_error.o 00:02:16.253 CC module/accel/error/accel_error_rpc.o 00:02:16.253 CC module/blob/bdev/blob_bdev.o 00:02:16.253 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:16.253 CC module/fsdev/aio/linux_aio_mgr.o 00:02:16.253 CC module/accel/iaa/accel_iaa.o 00:02:16.253 CC module/accel/iaa/accel_iaa_rpc.o 00:02:16.253 CC module/keyring/linux/keyring.o 00:02:16.253 CC module/keyring/linux/keyring_rpc.o 00:02:16.253 CC 
module/scheduler/dynamic/scheduler_dynamic.o 00:02:16.253 CC module/sock/posix/posix.o 00:02:16.253 CC module/scheduler/gscheduler/gscheduler.o 00:02:16.253 SYMLINK libspdk_env_dpdk_rpc.so 00:02:16.253 LIB libspdk_keyring_file.a 00:02:16.253 LIB libspdk_scheduler_dpdk_governor.a 00:02:16.253 LIB libspdk_keyring_linux.a 00:02:16.511 LIB libspdk_accel_ioat.a 00:02:16.511 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:16.511 SO libspdk_keyring_file.so.2.0 00:02:16.511 SO libspdk_keyring_linux.so.1.0 00:02:16.511 LIB libspdk_scheduler_gscheduler.a 00:02:16.511 LIB libspdk_accel_iaa.a 00:02:16.511 SO libspdk_accel_ioat.so.6.0 00:02:16.511 LIB libspdk_scheduler_dynamic.a 00:02:16.511 LIB libspdk_accel_error.a 00:02:16.511 SO libspdk_scheduler_dynamic.so.4.0 00:02:16.511 SO libspdk_scheduler_gscheduler.so.4.0 00:02:16.511 SO libspdk_accel_iaa.so.3.0 00:02:16.511 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:16.511 SYMLINK libspdk_keyring_linux.so 00:02:16.511 SYMLINK libspdk_keyring_file.so 00:02:16.511 SO libspdk_accel_error.so.2.0 00:02:16.511 LIB libspdk_blob_bdev.a 00:02:16.511 LIB libspdk_accel_dsa.a 00:02:16.511 SYMLINK libspdk_accel_ioat.so 00:02:16.511 SYMLINK libspdk_scheduler_gscheduler.so 00:02:16.511 SYMLINK libspdk_scheduler_dynamic.so 00:02:16.511 SO libspdk_blob_bdev.so.12.0 00:02:16.511 SO libspdk_accel_dsa.so.5.0 00:02:16.511 SYMLINK libspdk_accel_iaa.so 00:02:16.511 SYMLINK libspdk_accel_error.so 00:02:16.511 SYMLINK libspdk_blob_bdev.so 00:02:16.511 SYMLINK libspdk_accel_dsa.so 00:02:16.511 LIB libspdk_vfu_device.a 00:02:16.511 SO libspdk_vfu_device.so.3.0 00:02:16.770 SYMLINK libspdk_vfu_device.so 00:02:16.770 LIB libspdk_fsdev_aio.a 00:02:16.770 SO libspdk_fsdev_aio.so.1.0 00:02:16.770 LIB libspdk_sock_posix.a 00:02:16.770 SO libspdk_sock_posix.so.6.0 00:02:16.770 SYMLINK libspdk_fsdev_aio.so 00:02:17.028 SYMLINK libspdk_sock_posix.so 00:02:17.028 CC module/bdev/error/vbdev_error.o 00:02:17.028 CC module/bdev/error/vbdev_error_rpc.o 00:02:17.028 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:17.028 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:17.028 CC module/bdev/passthru/vbdev_passthru.o 00:02:17.028 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:17.029 CC module/bdev/lvol/vbdev_lvol.o 00:02:17.029 CC module/bdev/malloc/bdev_malloc.o 00:02:17.029 CC module/bdev/delay/vbdev_delay.o 00:02:17.029 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:17.029 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:17.029 CC module/bdev/gpt/gpt.o 00:02:17.029 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:17.029 CC module/bdev/gpt/vbdev_gpt.o 00:02:17.029 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:17.029 CC module/blobfs/bdev/blobfs_bdev.o 00:02:17.029 CC module/bdev/nvme/bdev_nvme.o 00:02:17.029 CC module/bdev/null/bdev_null.o 00:02:17.029 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:17.029 CC module/bdev/null/bdev_null_rpc.o 00:02:17.029 CC module/bdev/nvme/nvme_rpc.o 00:02:17.029 CC module/bdev/nvme/bdev_mdns_client.o 00:02:17.029 CC module/bdev/raid/bdev_raid.o 00:02:17.029 CC module/bdev/raid/bdev_raid_rpc.o 00:02:17.029 CC module/bdev/nvme/vbdev_opal.o 00:02:17.029 CC module/bdev/raid/bdev_raid_sb.o 00:02:17.029 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:17.029 CC module/bdev/split/vbdev_split.o 00:02:17.029 CC module/bdev/raid/raid0.o 00:02:17.029 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:17.029 CC module/bdev/split/vbdev_split_rpc.o 00:02:17.029 CC module/bdev/raid/raid1.o 00:02:17.029 CC module/bdev/raid/concat.o 00:02:17.029 CC module/bdev/virtio/bdev_virtio_scsi.o 
00:02:17.029 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:17.029 CC module/bdev/ftl/bdev_ftl.o 00:02:17.029 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:17.029 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:17.029 CC module/bdev/aio/bdev_aio.o 00:02:17.029 CC module/bdev/aio/bdev_aio_rpc.o 00:02:17.029 CC module/bdev/iscsi/bdev_iscsi.o 00:02:17.029 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:17.287 LIB libspdk_blobfs_bdev.a 00:02:17.287 SO libspdk_blobfs_bdev.so.6.0 00:02:17.287 LIB libspdk_bdev_error.a 00:02:17.287 LIB libspdk_bdev_split.a 00:02:17.287 LIB libspdk_bdev_null.a 00:02:17.287 SO libspdk_bdev_error.so.6.0 00:02:17.287 LIB libspdk_bdev_passthru.a 00:02:17.287 SO libspdk_bdev_split.so.6.0 00:02:17.287 SYMLINK libspdk_blobfs_bdev.so 00:02:17.287 SO libspdk_bdev_null.so.6.0 00:02:17.287 SO libspdk_bdev_passthru.so.6.0 00:02:17.546 LIB libspdk_bdev_gpt.a 00:02:17.546 SYMLINK libspdk_bdev_error.so 00:02:17.546 LIB libspdk_bdev_ftl.a 00:02:17.546 SO libspdk_bdev_gpt.so.6.0 00:02:17.546 LIB libspdk_bdev_aio.a 00:02:17.546 SYMLINK libspdk_bdev_split.so 00:02:17.546 LIB libspdk_bdev_zone_block.a 00:02:17.546 SYMLINK libspdk_bdev_passthru.so 00:02:17.546 SYMLINK libspdk_bdev_null.so 00:02:17.546 SO libspdk_bdev_ftl.so.6.0 00:02:17.546 SO libspdk_bdev_aio.so.6.0 00:02:17.546 LIB libspdk_bdev_iscsi.a 00:02:17.546 LIB libspdk_bdev_delay.a 00:02:17.546 LIB libspdk_bdev_malloc.a 00:02:17.546 SO libspdk_bdev_zone_block.so.6.0 00:02:17.546 SYMLINK libspdk_bdev_gpt.so 00:02:17.546 SO libspdk_bdev_malloc.so.6.0 00:02:17.546 SO libspdk_bdev_iscsi.so.6.0 00:02:17.546 SO libspdk_bdev_delay.so.6.0 00:02:17.546 SYMLINK libspdk_bdev_ftl.so 00:02:17.546 SYMLINK libspdk_bdev_aio.so 00:02:17.546 SYMLINK libspdk_bdev_zone_block.so 00:02:17.546 SYMLINK libspdk_bdev_malloc.so 00:02:17.546 SYMLINK libspdk_bdev_iscsi.so 00:02:17.546 SYMLINK libspdk_bdev_delay.so 00:02:17.546 LIB libspdk_bdev_virtio.a 00:02:17.546 LIB libspdk_bdev_lvol.a 00:02:17.546 SO libspdk_bdev_virtio.so.6.0 00:02:17.546 SO libspdk_bdev_lvol.so.6.0 00:02:17.805 SYMLINK libspdk_bdev_virtio.so 00:02:17.805 SYMLINK libspdk_bdev_lvol.so 00:02:18.064 LIB libspdk_bdev_raid.a 00:02:18.064 SO libspdk_bdev_raid.so.6.0 00:02:18.064 SYMLINK libspdk_bdev_raid.so 00:02:19.001 LIB libspdk_bdev_nvme.a 00:02:19.001 SO libspdk_bdev_nvme.so.7.1 00:02:19.260 SYMLINK libspdk_bdev_nvme.so 00:02:19.827 CC module/event/subsystems/vmd/vmd.o 00:02:19.827 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:19.827 CC module/event/subsystems/scheduler/scheduler.o 00:02:19.827 CC module/event/subsystems/iobuf/iobuf.o 00:02:19.827 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:19.827 CC module/event/subsystems/keyring/keyring.o 00:02:19.827 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:19.827 CC module/event/subsystems/sock/sock.o 00:02:19.827 CC module/event/subsystems/fsdev/fsdev.o 00:02:19.827 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:20.086 LIB libspdk_event_scheduler.a 00:02:20.086 LIB libspdk_event_keyring.a 00:02:20.086 LIB libspdk_event_sock.a 00:02:20.086 LIB libspdk_event_vmd.a 00:02:20.086 LIB libspdk_event_vfu_tgt.a 00:02:20.086 LIB libspdk_event_vhost_blk.a 00:02:20.086 SO libspdk_event_scheduler.so.4.0 00:02:20.086 SO libspdk_event_keyring.so.1.0 00:02:20.086 LIB libspdk_event_fsdev.a 00:02:20.086 LIB libspdk_event_iobuf.a 00:02:20.086 SO libspdk_event_sock.so.5.0 00:02:20.086 SO libspdk_event_vfu_tgt.so.3.0 00:02:20.086 SO libspdk_event_vmd.so.6.0 00:02:20.086 SO libspdk_event_vhost_blk.so.3.0 00:02:20.086 SO libspdk_event_fsdev.so.1.0 
00:02:20.086 SO libspdk_event_iobuf.so.3.0 00:02:20.086 SYMLINK libspdk_event_scheduler.so 00:02:20.086 SYMLINK libspdk_event_keyring.so 00:02:20.086 SYMLINK libspdk_event_vfu_tgt.so 00:02:20.086 SYMLINK libspdk_event_sock.so 00:02:20.086 SYMLINK libspdk_event_vmd.so 00:02:20.086 SYMLINK libspdk_event_vhost_blk.so 00:02:20.086 SYMLINK libspdk_event_fsdev.so 00:02:20.086 SYMLINK libspdk_event_iobuf.so 00:02:20.345 CC module/event/subsystems/accel/accel.o 00:02:20.604 LIB libspdk_event_accel.a 00:02:20.604 SO libspdk_event_accel.so.6.0 00:02:20.604 SYMLINK libspdk_event_accel.so 00:02:21.172 CC module/event/subsystems/bdev/bdev.o 00:02:21.172 LIB libspdk_event_bdev.a 00:02:21.172 SO libspdk_event_bdev.so.6.0 00:02:21.172 SYMLINK libspdk_event_bdev.so 00:02:21.739 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:21.739 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:21.739 CC module/event/subsystems/scsi/scsi.o 00:02:21.739 CC module/event/subsystems/nbd/nbd.o 00:02:21.739 CC module/event/subsystems/ublk/ublk.o 00:02:21.739 LIB libspdk_event_nbd.a 00:02:21.739 LIB libspdk_event_ublk.a 00:02:21.739 LIB libspdk_event_scsi.a 00:02:21.739 SO libspdk_event_nbd.so.6.0 00:02:21.739 SO libspdk_event_ublk.so.3.0 00:02:21.739 SO libspdk_event_scsi.so.6.0 00:02:21.739 LIB libspdk_event_nvmf.a 00:02:21.739 SYMLINK libspdk_event_nbd.so 00:02:21.739 SYMLINK libspdk_event_ublk.so 00:02:21.739 SO libspdk_event_nvmf.so.6.0 00:02:21.999 SYMLINK libspdk_event_scsi.so 00:02:21.999 SYMLINK libspdk_event_nvmf.so 00:02:22.258 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:22.258 CC module/event/subsystems/iscsi/iscsi.o 00:02:22.258 LIB libspdk_event_vhost_scsi.a 00:02:22.258 LIB libspdk_event_iscsi.a 00:02:22.517 SO libspdk_event_vhost_scsi.so.3.0 00:02:22.517 SO libspdk_event_iscsi.so.6.0 00:02:22.517 SYMLINK libspdk_event_vhost_scsi.so 00:02:22.517 SYMLINK libspdk_event_iscsi.so 00:02:22.776 SO libspdk.so.6.0 00:02:22.776 SYMLINK libspdk.so 00:02:23.038 CC app/trace_record/trace_record.o 00:02:23.038 CXX app/trace/trace.o 00:02:23.038 CC app/spdk_top/spdk_top.o 00:02:23.038 CC test/rpc_client/rpc_client_test.o 00:02:23.038 CC app/spdk_lspci/spdk_lspci.o 00:02:23.038 CC app/spdk_nvme_discover/discovery_aer.o 00:02:23.038 TEST_HEADER include/spdk/accel_module.h 00:02:23.038 TEST_HEADER include/spdk/accel.h 00:02:23.038 CC app/spdk_nvme_identify/identify.o 00:02:23.038 TEST_HEADER include/spdk/assert.h 00:02:23.038 TEST_HEADER include/spdk/base64.h 00:02:23.038 TEST_HEADER include/spdk/barrier.h 00:02:23.038 TEST_HEADER include/spdk/bdev_zone.h 00:02:23.038 TEST_HEADER include/spdk/bdev.h 00:02:23.038 TEST_HEADER include/spdk/bit_array.h 00:02:23.038 TEST_HEADER include/spdk/bdev_module.h 00:02:23.038 CC app/spdk_nvme_perf/perf.o 00:02:23.038 TEST_HEADER include/spdk/bit_pool.h 00:02:23.038 TEST_HEADER include/spdk/blob_bdev.h 00:02:23.038 TEST_HEADER include/spdk/blobfs.h 00:02:23.038 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:23.038 TEST_HEADER include/spdk/conf.h 00:02:23.038 TEST_HEADER include/spdk/blob.h 00:02:23.038 TEST_HEADER include/spdk/config.h 00:02:23.038 TEST_HEADER include/spdk/cpuset.h 00:02:23.038 TEST_HEADER include/spdk/crc32.h 00:02:23.038 TEST_HEADER include/spdk/crc64.h 00:02:23.038 TEST_HEADER include/spdk/dif.h 00:02:23.038 TEST_HEADER include/spdk/crc16.h 00:02:23.038 TEST_HEADER include/spdk/endian.h 00:02:23.038 TEST_HEADER include/spdk/dma.h 00:02:23.038 TEST_HEADER include/spdk/env.h 00:02:23.038 TEST_HEADER include/spdk/fd_group.h 00:02:23.038 TEST_HEADER 
include/spdk/event.h 00:02:23.038 TEST_HEADER include/spdk/file.h 00:02:23.038 TEST_HEADER include/spdk/fd.h 00:02:23.038 TEST_HEADER include/spdk/env_dpdk.h 00:02:23.038 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:23.038 TEST_HEADER include/spdk/fsdev_module.h 00:02:23.038 TEST_HEADER include/spdk/ftl.h 00:02:23.038 TEST_HEADER include/spdk/gpt_spec.h 00:02:23.038 TEST_HEADER include/spdk/fsdev.h 00:02:23.038 TEST_HEADER include/spdk/hexlify.h 00:02:23.038 TEST_HEADER include/spdk/histogram_data.h 00:02:23.038 TEST_HEADER include/spdk/idxd_spec.h 00:02:23.038 TEST_HEADER include/spdk/idxd.h 00:02:23.038 TEST_HEADER include/spdk/ioat.h 00:02:23.038 TEST_HEADER include/spdk/init.h 00:02:23.038 TEST_HEADER include/spdk/ioat_spec.h 00:02:23.038 TEST_HEADER include/spdk/json.h 00:02:23.038 TEST_HEADER include/spdk/iscsi_spec.h 00:02:23.038 TEST_HEADER include/spdk/jsonrpc.h 00:02:23.038 TEST_HEADER include/spdk/keyring.h 00:02:23.038 CC app/spdk_dd/spdk_dd.o 00:02:23.038 TEST_HEADER include/spdk/keyring_module.h 00:02:23.038 TEST_HEADER include/spdk/log.h 00:02:23.038 TEST_HEADER include/spdk/lvol.h 00:02:23.038 TEST_HEADER include/spdk/memory.h 00:02:23.038 TEST_HEADER include/spdk/likely.h 00:02:23.038 TEST_HEADER include/spdk/mmio.h 00:02:23.038 TEST_HEADER include/spdk/md5.h 00:02:23.038 CC app/nvmf_tgt/nvmf_main.o 00:02:23.038 TEST_HEADER include/spdk/notify.h 00:02:23.038 TEST_HEADER include/spdk/nbd.h 00:02:23.038 TEST_HEADER include/spdk/nvme.h 00:02:23.038 TEST_HEADER include/spdk/net.h 00:02:23.038 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:23.038 TEST_HEADER include/spdk/nvme_intel.h 00:02:23.038 TEST_HEADER include/spdk/nvme_spec.h 00:02:23.038 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:23.038 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:23.038 TEST_HEADER include/spdk/nvme_zns.h 00:02:23.038 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:23.038 TEST_HEADER include/spdk/nvmf_transport.h 00:02:23.038 TEST_HEADER include/spdk/nvmf.h 00:02:23.038 TEST_HEADER include/spdk/opal.h 00:02:23.038 TEST_HEADER include/spdk/nvmf_spec.h 00:02:23.038 TEST_HEADER include/spdk/pci_ids.h 00:02:23.038 TEST_HEADER include/spdk/opal_spec.h 00:02:23.038 TEST_HEADER include/spdk/pipe.h 00:02:23.038 TEST_HEADER include/spdk/queue.h 00:02:23.038 TEST_HEADER include/spdk/reduce.h 00:02:23.038 TEST_HEADER include/spdk/rpc.h 00:02:23.038 TEST_HEADER include/spdk/scheduler.h 00:02:23.038 TEST_HEADER include/spdk/scsi.h 00:02:23.038 TEST_HEADER include/spdk/scsi_spec.h 00:02:23.038 CC app/spdk_tgt/spdk_tgt.o 00:02:23.038 TEST_HEADER include/spdk/stdinc.h 00:02:23.038 TEST_HEADER include/spdk/sock.h 00:02:23.038 TEST_HEADER include/spdk/string.h 00:02:23.038 TEST_HEADER include/spdk/thread.h 00:02:23.038 CC app/iscsi_tgt/iscsi_tgt.o 00:02:23.038 TEST_HEADER include/spdk/trace.h 00:02:23.038 TEST_HEADER include/spdk/trace_parser.h 00:02:23.038 TEST_HEADER include/spdk/tree.h 00:02:23.038 TEST_HEADER include/spdk/util.h 00:02:23.038 TEST_HEADER include/spdk/uuid.h 00:02:23.038 TEST_HEADER include/spdk/ublk.h 00:02:23.038 TEST_HEADER include/spdk/version.h 00:02:23.038 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:23.038 TEST_HEADER include/spdk/vmd.h 00:02:23.038 TEST_HEADER include/spdk/vhost.h 00:02:23.038 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:23.038 TEST_HEADER include/spdk/xor.h 00:02:23.038 TEST_HEADER include/spdk/zipf.h 00:02:23.038 CXX test/cpp_headers/accel.o 00:02:23.038 CXX test/cpp_headers/accel_module.o 00:02:23.038 CXX test/cpp_headers/assert.o 00:02:23.038 CXX 
test/cpp_headers/barrier.o 00:02:23.038 CXX test/cpp_headers/base64.o 00:02:23.038 CXX test/cpp_headers/bdev.o 00:02:23.038 CXX test/cpp_headers/bdev_module.o 00:02:23.038 CXX test/cpp_headers/bdev_zone.o 00:02:23.038 CXX test/cpp_headers/bit_array.o 00:02:23.038 CXX test/cpp_headers/bit_pool.o 00:02:23.038 CXX test/cpp_headers/blob_bdev.o 00:02:23.038 CXX test/cpp_headers/blobfs_bdev.o 00:02:23.038 CXX test/cpp_headers/blob.o 00:02:23.038 CXX test/cpp_headers/blobfs.o 00:02:23.038 CXX test/cpp_headers/config.o 00:02:23.038 CXX test/cpp_headers/conf.o 00:02:23.038 CXX test/cpp_headers/crc16.o 00:02:23.038 CXX test/cpp_headers/crc32.o 00:02:23.038 CXX test/cpp_headers/cpuset.o 00:02:23.038 CXX test/cpp_headers/dif.o 00:02:23.038 CXX test/cpp_headers/crc64.o 00:02:23.038 CXX test/cpp_headers/dma.o 00:02:23.038 CXX test/cpp_headers/env_dpdk.o 00:02:23.038 CXX test/cpp_headers/env.o 00:02:23.038 CXX test/cpp_headers/endian.o 00:02:23.038 CXX test/cpp_headers/event.o 00:02:23.038 CXX test/cpp_headers/file.o 00:02:23.038 CXX test/cpp_headers/fd_group.o 00:02:23.038 CXX test/cpp_headers/fsdev_module.o 00:02:23.038 CXX test/cpp_headers/fsdev.o 00:02:23.038 CXX test/cpp_headers/fd.o 00:02:23.038 CXX test/cpp_headers/ftl.o 00:02:23.038 CXX test/cpp_headers/hexlify.o 00:02:23.038 CXX test/cpp_headers/histogram_data.o 00:02:23.038 CXX test/cpp_headers/idxd.o 00:02:23.038 CXX test/cpp_headers/gpt_spec.o 00:02:23.310 CXX test/cpp_headers/idxd_spec.o 00:02:23.310 CXX test/cpp_headers/init.o 00:02:23.310 CXX test/cpp_headers/iscsi_spec.o 00:02:23.310 CXX test/cpp_headers/json.o 00:02:23.310 CXX test/cpp_headers/ioat.o 00:02:23.310 CXX test/cpp_headers/ioat_spec.o 00:02:23.310 CXX test/cpp_headers/jsonrpc.o 00:02:23.310 CXX test/cpp_headers/keyring.o 00:02:23.310 CXX test/cpp_headers/keyring_module.o 00:02:23.310 CXX test/cpp_headers/likely.o 00:02:23.310 CXX test/cpp_headers/log.o 00:02:23.310 CXX test/cpp_headers/lvol.o 00:02:23.310 CXX test/cpp_headers/mmio.o 00:02:23.310 CXX test/cpp_headers/md5.o 00:02:23.310 CXX test/cpp_headers/memory.o 00:02:23.310 CXX test/cpp_headers/nbd.o 00:02:23.310 CXX test/cpp_headers/net.o 00:02:23.310 CXX test/cpp_headers/nvme.o 00:02:23.310 CXX test/cpp_headers/notify.o 00:02:23.310 CXX test/cpp_headers/nvme_ocssd.o 00:02:23.310 CXX test/cpp_headers/nvme_spec.o 00:02:23.310 CXX test/cpp_headers/nvme_intel.o 00:02:23.310 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:23.310 CXX test/cpp_headers/nvme_zns.o 00:02:23.310 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:23.310 CXX test/cpp_headers/nvmf_cmd.o 00:02:23.310 CXX test/cpp_headers/nvmf.o 00:02:23.310 CXX test/cpp_headers/nvmf_transport.o 00:02:23.310 CXX test/cpp_headers/nvmf_spec.o 00:02:23.310 CXX test/cpp_headers/opal.o 00:02:23.310 CXX test/cpp_headers/opal_spec.o 00:02:23.310 CC examples/util/zipf/zipf.o 00:02:23.310 CC test/thread/poller_perf/poller_perf.o 00:02:23.310 CC test/env/memory/memory_ut.o 00:02:23.310 CC examples/ioat/perf/perf.o 00:02:23.310 CC app/fio/nvme/fio_plugin.o 00:02:23.310 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:23.310 CXX test/cpp_headers/pci_ids.o 00:02:23.310 CC test/env/pci/pci_ut.o 00:02:23.310 CC test/app/histogram_perf/histogram_perf.o 00:02:23.310 CC test/env/vtophys/vtophys.o 00:02:23.310 CC examples/ioat/verify/verify.o 00:02:23.310 CC test/app/stub/stub.o 00:02:23.310 CC test/app/jsoncat/jsoncat.o 00:02:23.310 LINK spdk_lspci 00:02:23.310 CC app/fio/bdev/fio_plugin.o 00:02:23.310 CC test/dma/test_dma/test_dma.o 00:02:23.311 CC test/app/bdev_svc/bdev_svc.o 00:02:23.580 
LINK rpc_client_test 00:02:23.580 LINK spdk_nvme_discover 00:02:23.580 LINK spdk_tgt 00:02:23.845 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:23.845 LINK interrupt_tgt 00:02:23.845 LINK nvmf_tgt 00:02:23.845 LINK zipf 00:02:23.845 CC test/env/mem_callbacks/mem_callbacks.o 00:02:23.845 CXX test/cpp_headers/pipe.o 00:02:23.845 CXX test/cpp_headers/queue.o 00:02:23.845 CXX test/cpp_headers/reduce.o 00:02:23.845 CXX test/cpp_headers/rpc.o 00:02:23.845 CXX test/cpp_headers/scheduler.o 00:02:23.845 CXX test/cpp_headers/scsi_spec.o 00:02:23.845 CXX test/cpp_headers/scsi.o 00:02:23.845 CXX test/cpp_headers/sock.o 00:02:23.845 LINK spdk_trace_record 00:02:23.845 CXX test/cpp_headers/string.o 00:02:23.845 CXX test/cpp_headers/stdinc.o 00:02:23.845 CXX test/cpp_headers/thread.o 00:02:23.845 LINK env_dpdk_post_init 00:02:23.845 CXX test/cpp_headers/trace.o 00:02:23.845 CXX test/cpp_headers/trace_parser.o 00:02:23.845 CXX test/cpp_headers/tree.o 00:02:23.845 CXX test/cpp_headers/util.o 00:02:23.845 CXX test/cpp_headers/ublk.o 00:02:23.845 CXX test/cpp_headers/version.o 00:02:23.845 CXX test/cpp_headers/uuid.o 00:02:23.845 CXX test/cpp_headers/vfio_user_pci.o 00:02:23.845 CXX test/cpp_headers/vfio_user_spec.o 00:02:23.845 CXX test/cpp_headers/vmd.o 00:02:23.845 CXX test/cpp_headers/vhost.o 00:02:23.845 CXX test/cpp_headers/xor.o 00:02:23.845 CXX test/cpp_headers/zipf.o 00:02:23.845 LINK poller_perf 00:02:23.845 LINK iscsi_tgt 00:02:23.845 LINK jsoncat 00:02:23.845 LINK histogram_perf 00:02:23.845 LINK vtophys 00:02:23.845 LINK verify 00:02:23.845 LINK spdk_dd 00:02:24.104 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:24.104 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:24.104 LINK stub 00:02:24.104 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:24.104 LINK spdk_trace 00:02:24.104 LINK ioat_perf 00:02:24.104 LINK bdev_svc 00:02:24.104 LINK pci_ut 00:02:24.362 LINK nvme_fuzz 00:02:24.362 LINK spdk_nvme_perf 00:02:24.362 LINK spdk_nvme 00:02:24.362 CC examples/idxd/perf/perf.o 00:02:24.362 CC examples/sock/hello_world/hello_sock.o 00:02:24.362 CC examples/vmd/lsvmd/lsvmd.o 00:02:24.362 CC examples/vmd/led/led.o 00:02:24.362 LINK spdk_bdev 00:02:24.362 CC test/event/event_perf/event_perf.o 00:02:24.362 CC test/event/reactor/reactor.o 00:02:24.362 CC test/event/reactor_perf/reactor_perf.o 00:02:24.362 CC test/event/app_repeat/app_repeat.o 00:02:24.362 CC examples/thread/thread/thread_ex.o 00:02:24.362 CC test/event/scheduler/scheduler.o 00:02:24.362 LINK test_dma 00:02:24.362 LINK mem_callbacks 00:02:24.362 LINK vhost_fuzz 00:02:24.362 LINK lsvmd 00:02:24.362 LINK led 00:02:24.621 LINK reactor 00:02:24.621 LINK event_perf 00:02:24.621 CC app/vhost/vhost.o 00:02:24.621 LINK reactor_perf 00:02:24.621 LINK spdk_nvme_identify 00:02:24.621 LINK app_repeat 00:02:24.621 LINK hello_sock 00:02:24.621 LINK spdk_top 00:02:24.621 LINK thread 00:02:24.621 LINK idxd_perf 00:02:24.621 LINK scheduler 00:02:24.621 LINK vhost 00:02:24.880 LINK memory_ut 00:02:24.880 CC test/nvme/aer/aer.o 00:02:24.880 CC test/nvme/sgl/sgl.o 00:02:24.880 CC test/nvme/err_injection/err_injection.o 00:02:24.880 CC test/nvme/reset/reset.o 00:02:24.880 CC test/nvme/fused_ordering/fused_ordering.o 00:02:24.880 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:24.880 CC test/nvme/fdp/fdp.o 00:02:24.880 CC test/nvme/e2edp/nvme_dp.o 00:02:24.880 CC test/nvme/connect_stress/connect_stress.o 00:02:24.880 CC test/nvme/simple_copy/simple_copy.o 00:02:24.880 CC test/nvme/compliance/nvme_compliance.o 00:02:24.880 CC test/nvme/startup/startup.o 
00:02:24.880 CC test/nvme/overhead/overhead.o 00:02:24.880 CC test/nvme/cuse/cuse.o 00:02:24.880 CC test/nvme/reserve/reserve.o 00:02:24.880 CC test/nvme/boot_partition/boot_partition.o 00:02:24.880 CC test/blobfs/mkfs/mkfs.o 00:02:24.880 CC test/accel/dif/dif.o 00:02:25.140 CC examples/nvme/abort/abort.o 00:02:25.140 CC examples/nvme/arbitration/arbitration.o 00:02:25.140 CC examples/nvme/hotplug/hotplug.o 00:02:25.140 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:25.140 CC examples/nvme/hello_world/hello_world.o 00:02:25.140 CC examples/nvme/reconnect/reconnect.o 00:02:25.140 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:25.140 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:25.140 CC test/lvol/esnap/esnap.o 00:02:25.140 CC examples/accel/perf/accel_perf.o 00:02:25.140 CC examples/blob/cli/blobcli.o 00:02:25.140 CC examples/blob/hello_world/hello_blob.o 00:02:25.140 LINK connect_stress 00:02:25.140 LINK fused_ordering 00:02:25.140 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:25.140 LINK startup 00:02:25.140 LINK boot_partition 00:02:25.140 LINK err_injection 00:02:25.140 LINK doorbell_aers 00:02:25.140 LINK reserve 00:02:25.140 LINK simple_copy 00:02:25.140 LINK mkfs 00:02:25.140 LINK nvme_dp 00:02:25.140 LINK aer 00:02:25.140 LINK reset 00:02:25.140 LINK sgl 00:02:25.140 LINK pmr_persistence 00:02:25.140 LINK cmb_copy 00:02:25.140 LINK fdp 00:02:25.140 LINK overhead 00:02:25.399 LINK nvme_compliance 00:02:25.399 LINK hello_world 00:02:25.399 LINK hotplug 00:02:25.399 LINK arbitration 00:02:25.399 LINK hello_blob 00:02:25.399 LINK reconnect 00:02:25.399 LINK abort 00:02:25.399 LINK hello_fsdev 00:02:25.399 LINK iscsi_fuzz 00:02:25.399 LINK nvme_manage 00:02:25.658 LINK accel_perf 00:02:25.658 LINK blobcli 00:02:25.658 LINK dif 00:02:26.227 LINK cuse 00:02:26.227 CC examples/bdev/hello_world/hello_bdev.o 00:02:26.227 CC examples/bdev/bdevperf/bdevperf.o 00:02:26.227 CC test/bdev/bdevio/bdevio.o 00:02:26.227 LINK hello_bdev 00:02:26.486 LINK bdevio 00:02:26.745 LINK bdevperf 00:02:27.314 CC examples/nvmf/nvmf/nvmf.o 00:02:27.572 LINK nvmf 00:02:28.953 LINK esnap 00:02:28.953 00:02:28.953 real 0m56.668s 00:02:28.953 user 8m25.353s 00:02:28.953 sys 3m50.283s 00:02:28.953 10:16:02 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:28.953 10:16:02 make -- common/autotest_common.sh@10 -- $ set +x 00:02:28.953 ************************************ 00:02:28.953 END TEST make 00:02:28.953 ************************************ 00:02:28.953 10:16:02 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:28.953 10:16:02 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:28.953 10:16:02 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:28.953 10:16:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:28.953 10:16:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:28.953 10:16:02 -- pm/common@44 -- $ pid=1242097 00:02:28.953 10:16:02 -- pm/common@50 -- $ kill -TERM 1242097 00:02:28.953 10:16:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:28.953 10:16:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:28.953 10:16:02 -- pm/common@44 -- $ pid=1242099 00:02:28.953 10:16:02 -- pm/common@50 -- $ kill -TERM 1242099 00:02:28.953 10:16:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:28.953 10:16:02 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:28.953 10:16:02 -- pm/common@44 -- $ pid=1242101 00:02:28.953 10:16:02 -- pm/common@50 -- $ kill -TERM 1242101 00:02:28.953 10:16:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:28.953 10:16:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:28.953 10:16:02 -- pm/common@44 -- $ pid=1242126 00:02:28.953 10:16:02 -- pm/common@50 -- $ sudo -E kill -TERM 1242126 00:02:28.953 10:16:02 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:28.953 10:16:02 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:29.214 10:16:02 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:02:29.214 10:16:02 -- common/autotest_common.sh@1711 -- # lcov --version 00:02:29.214 10:16:02 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:02:29.214 10:16:03 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:02:29.214 10:16:03 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:29.214 10:16:03 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:29.214 10:16:03 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:29.214 10:16:03 -- scripts/common.sh@336 -- # IFS=.-: 00:02:29.214 10:16:03 -- scripts/common.sh@336 -- # read -ra ver1 00:02:29.214 10:16:03 -- scripts/common.sh@337 -- # IFS=.-: 00:02:29.214 10:16:03 -- scripts/common.sh@337 -- # read -ra ver2 00:02:29.214 10:16:03 -- scripts/common.sh@338 -- # local 'op=<' 00:02:29.214 10:16:03 -- scripts/common.sh@340 -- # ver1_l=2 00:02:29.214 10:16:03 -- scripts/common.sh@341 -- # ver2_l=1 00:02:29.214 10:16:03 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:29.214 10:16:03 -- scripts/common.sh@344 -- # case "$op" in 00:02:29.214 10:16:03 -- scripts/common.sh@345 -- # : 1 00:02:29.214 10:16:03 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:29.214 10:16:03 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:29.214 10:16:03 -- scripts/common.sh@365 -- # decimal 1 00:02:29.214 10:16:03 -- scripts/common.sh@353 -- # local d=1 00:02:29.214 10:16:03 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:29.214 10:16:03 -- scripts/common.sh@355 -- # echo 1 00:02:29.214 10:16:03 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:29.214 10:16:03 -- scripts/common.sh@366 -- # decimal 2 00:02:29.214 10:16:03 -- scripts/common.sh@353 -- # local d=2 00:02:29.214 10:16:03 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:29.214 10:16:03 -- scripts/common.sh@355 -- # echo 2 00:02:29.214 10:16:03 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:29.214 10:16:03 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:29.214 10:16:03 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:29.214 10:16:03 -- scripts/common.sh@368 -- # return 0 00:02:29.214 10:16:03 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:29.214 10:16:03 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:02:29.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:29.214 --rc genhtml_branch_coverage=1 00:02:29.214 --rc genhtml_function_coverage=1 00:02:29.214 --rc genhtml_legend=1 00:02:29.214 --rc geninfo_all_blocks=1 00:02:29.214 --rc geninfo_unexecuted_blocks=1 00:02:29.214 00:02:29.214 ' 00:02:29.214 10:16:03 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:02:29.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:29.214 --rc genhtml_branch_coverage=1 00:02:29.214 --rc genhtml_function_coverage=1 00:02:29.214 --rc genhtml_legend=1 00:02:29.214 --rc geninfo_all_blocks=1 00:02:29.214 --rc geninfo_unexecuted_blocks=1 00:02:29.214 00:02:29.214 ' 00:02:29.214 10:16:03 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:02:29.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:29.214 --rc genhtml_branch_coverage=1 00:02:29.214 --rc genhtml_function_coverage=1 00:02:29.214 --rc genhtml_legend=1 00:02:29.214 --rc geninfo_all_blocks=1 00:02:29.214 --rc geninfo_unexecuted_blocks=1 00:02:29.214 00:02:29.214 ' 00:02:29.214 10:16:03 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:02:29.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:29.214 --rc genhtml_branch_coverage=1 00:02:29.214 --rc genhtml_function_coverage=1 00:02:29.214 --rc genhtml_legend=1 00:02:29.214 --rc geninfo_all_blocks=1 00:02:29.214 --rc geninfo_unexecuted_blocks=1 00:02:29.214 00:02:29.214 ' 00:02:29.214 10:16:03 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:29.214 10:16:03 -- nvmf/common.sh@7 -- # uname -s 00:02:29.214 10:16:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:29.214 10:16:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:29.214 10:16:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:29.214 10:16:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:29.214 10:16:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:29.214 10:16:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:29.214 10:16:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:29.214 10:16:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:29.214 10:16:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:29.214 10:16:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:29.214 10:16:03 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:02:29.214 10:16:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:02:29.214 10:16:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:29.214 10:16:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:29.214 10:16:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:29.214 10:16:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:29.214 10:16:03 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:29.214 10:16:03 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:29.214 10:16:03 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:29.214 10:16:03 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:29.214 10:16:03 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:29.214 10:16:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.214 10:16:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.215 10:16:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.215 10:16:03 -- paths/export.sh@5 -- # export PATH 00:02:29.215 10:16:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.215 10:16:03 -- nvmf/common.sh@51 -- # : 0 00:02:29.215 10:16:03 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:29.215 10:16:03 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:29.215 10:16:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:29.215 10:16:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:29.215 10:16:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:29.215 10:16:03 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:29.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:29.215 10:16:03 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:29.215 10:16:03 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:29.215 10:16:03 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:29.215 10:16:03 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:29.215 10:16:03 -- spdk/autotest.sh@32 -- # uname -s 00:02:29.215 10:16:03 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:29.215 10:16:03 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:29.215 10:16:03 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
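
Note: the "lt 1.15 2" xtrace above is the lcov version gate from scripts/common.sh. Both version strings are split on the characters ".-:" and compared component by component to decide whether the legacy --rc lcov_* options are needed. A minimal bash reconstruction of that comparison, assembled from the trace (an illustrative sketch, not the verbatim library source):

    # Sketch of the cmp_versions/lt logic traced above; missing version
    # components default to 0 here.
    cmp_versions() {
        local ver1 ver2 ver1_l ver2_l op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == ">" ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == "<" ]]; return; }
        done
        [[ $op == "==" ]]
    }
    lt() { cmp_versions "$1" "<" "$2"; }

    lt 1.15 2 && echo "lcov older than 2.x: keep the legacy --rc options"
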
00:02:29.215 10:16:03 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:29.215 10:16:03 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:29.215 10:16:03 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:29.215 10:16:03 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:29.215 10:16:03 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:29.215 10:16:03 -- spdk/autotest.sh@48 -- # udevadm_pid=1306142 00:02:29.215 10:16:03 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:29.215 10:16:03 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:29.215 10:16:03 -- pm/common@17 -- # local monitor 00:02:29.215 10:16:03 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.215 10:16:03 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.215 10:16:03 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.215 10:16:03 -- pm/common@21 -- # date +%s 00:02:29.215 10:16:03 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.215 10:16:03 -- pm/common@21 -- # date +%s 00:02:29.215 10:16:03 -- pm/common@25 -- # sleep 1 00:02:29.215 10:16:03 -- pm/common@21 -- # date +%s 00:02:29.215 10:16:03 -- pm/common@21 -- # date +%s 00:02:29.215 10:16:03 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733994963 00:02:29.215 10:16:03 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733994963 00:02:29.215 10:16:03 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733994963 00:02:29.215 10:16:03 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733994963 00:02:29.215 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733994963_collect-cpu-load.pm.log 00:02:29.215 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733994963_collect-vmstat.pm.log 00:02:29.215 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733994963_collect-cpu-temp.pm.log 00:02:29.215 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733994963_collect-bmc-pm.bmc.pm.log 00:02:30.153 10:16:04 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:30.153 10:16:04 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:30.153 10:16:04 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:30.153 10:16:04 -- common/autotest_common.sh@10 -- # set +x 00:02:30.153 10:16:04 -- spdk/autotest.sh@59 -- # create_test_list 00:02:30.153 10:16:04 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:30.153 10:16:04 -- common/autotest_common.sh@10 -- # set +x 00:02:30.153 10:16:04 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:30.153 10:16:04 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:30.413 10:16:04 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:30.413 10:16:04 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:30.413 10:16:04 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:30.413 10:16:04 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:30.413 10:16:04 -- common/autotest_common.sh@1457 -- # uname 00:02:30.413 10:16:04 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:30.413 10:16:04 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:30.413 10:16:04 -- common/autotest_common.sh@1477 -- # uname 00:02:30.413 10:16:04 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:30.413 10:16:04 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:30.413 10:16:04 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:30.413 lcov: LCOV version 1.15 00:02:30.413 10:16:04 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:42.624 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:42.624 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:54.911 10:16:28 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:02:54.911 10:16:28 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:54.911 10:16:28 -- common/autotest_common.sh@10 -- # set +x 00:02:54.911 10:16:28 -- spdk/autotest.sh@78 -- # rm -f 00:02:54.911 10:16:28 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:58.202 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:02:58.202 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:58.202 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:58.202 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:58.202 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:58.202 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:58.202 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:58.202 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:58.202 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:58.202 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:58.202 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:58.202 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:58.202 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:58.202 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:58.202 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:58.202 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:58.202 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:58.202 10:16:31 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:02:58.202 10:16:31 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:02:58.202 10:16:31 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:02:58.202 10:16:31 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:02:58.202 10:16:31 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:02:58.202 10:16:31 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:02:58.202 10:16:31 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:02:58.202 10:16:31 -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0 00:02:58.202 10:16:31 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:02:58.202 10:16:31 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:02:58.202 10:16:31 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:02:58.202 10:16:31 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:58.202 10:16:31 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:02:58.202 10:16:31 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:02:58.202 10:16:31 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:02:58.202 10:16:31 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:02:58.202 10:16:31 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:02:58.202 10:16:31 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:02:58.202 10:16:31 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:58.202 No valid GPT data, bailing 00:02:58.202 10:16:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:58.202 10:16:32 -- scripts/common.sh@394 -- # pt= 00:02:58.202 10:16:32 -- scripts/common.sh@395 -- # return 1 00:02:58.202 10:16:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:58.202 1+0 records in 00:02:58.202 1+0 records out 00:02:58.202 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0038839 s, 270 MB/s 00:02:58.202 10:16:32 -- spdk/autotest.sh@105 -- # sync 00:02:58.202 10:16:32 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:58.202 10:16:32 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:58.202 10:16:32 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:03.480 10:16:37 -- spdk/autotest.sh@111 -- # uname -s 00:03:03.480 10:16:37 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:03.480 10:16:37 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:03.480 10:16:37 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:06.774 Hugepages 00:03:06.774 node hugesize free / total 00:03:06.774 node0 1048576kB 0 / 0 00:03:06.774 node0 2048kB 0 / 0 00:03:06.774 node1 1048576kB 0 / 0 00:03:06.774 node1 2048kB 0 / 0 00:03:06.774 00:03:06.774 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:06.774 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:06.774 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:06.774 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:06.774 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:06.774 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:06.774 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:06.774 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:06.774 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:06.774 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:06.774 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:06.774 I/OAT 0000:80:04.1 8086 2021 1 
ioatdma - - 00:03:06.774 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:06.774 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:06.774 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:06.774 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:06.774 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:06.774 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:06.774 10:16:40 -- spdk/autotest.sh@117 -- # uname -s 00:03:06.774 10:16:40 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:06.774 10:16:40 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:06.774 10:16:40 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:09.311 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:09.311 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:09.311 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:09.311 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:09.311 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:09.311 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:09.311 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:09.311 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:09.311 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:09.311 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:09.311 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:09.311 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:09.311 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:09.311 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:09.311 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:09.311 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:10.249 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:10.249 10:16:44 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:11.190 10:16:45 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:11.190 10:16:45 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:11.190 10:16:45 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:11.190 10:16:45 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:11.190 10:16:45 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:11.190 10:16:45 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:11.190 10:16:45 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:11.190 10:16:45 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:11.190 10:16:45 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:11.451 10:16:45 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:11.451 10:16:45 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:11.451 10:16:45 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:13.989 Waiting for block devices as requested 00:03:13.989 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:14.249 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:14.249 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:14.509 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:14.509 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:14.509 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:14.509 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:14.769 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:14.769 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:14.769 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:15.028 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 
00:03:15.028 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:15.028 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:15.028 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:15.287 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:15.287 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:15.287 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:15.547 10:16:49 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:15.547 10:16:49 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:15.547 10:16:49 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:15.547 10:16:49 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:03:15.547 10:16:49 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:15.547 10:16:49 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:15.547 10:16:49 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:15.547 10:16:49 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:15.547 10:16:49 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:15.547 10:16:49 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:15.547 10:16:49 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:15.547 10:16:49 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:15.547 10:16:49 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:15.547 10:16:49 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:03:15.547 10:16:49 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:15.547 10:16:49 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:15.547 10:16:49 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:15.547 10:16:49 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:15.547 10:16:49 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:15.547 10:16:49 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:15.547 10:16:49 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:15.547 10:16:49 -- common/autotest_common.sh@1543 -- # continue 00:03:15.547 10:16:49 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:15.547 10:16:49 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:15.547 10:16:49 -- common/autotest_common.sh@10 -- # set +x 00:03:15.547 10:16:49 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:15.547 10:16:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:15.547 10:16:49 -- common/autotest_common.sh@10 -- # set +x 00:03:15.547 10:16:49 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:18.840 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:18.840 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:18.840 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:18.840 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:18.840 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:18.840 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:18.840 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:18.840 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:18.840 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:18.840 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:18.840 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:18.840 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:18.840 0000:80:04.3 
(8086 2021): ioatdma -> vfio-pci 00:03:18.840 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:18.840 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:18.840 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:19.409 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:19.409 10:16:53 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:19.409 10:16:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:19.409 10:16:53 -- common/autotest_common.sh@10 -- # set +x 00:03:19.409 10:16:53 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:19.409 10:16:53 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:19.409 10:16:53 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:19.409 10:16:53 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:19.409 10:16:53 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:19.409 10:16:53 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:19.409 10:16:53 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:19.409 10:16:53 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:19.409 10:16:53 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:19.409 10:16:53 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:19.409 10:16:53 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:19.409 10:16:53 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:19.409 10:16:53 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:19.668 10:16:53 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:19.668 10:16:53 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:19.668 10:16:53 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:19.668 10:16:53 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:03:19.668 10:16:53 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:19.668 10:16:53 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:19.668 10:16:53 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:19.668 10:16:53 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:03:19.668 10:16:53 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:03:19.668 10:16:53 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:03:19.668 10:16:53 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=1320061 00:03:19.668 10:16:53 -- common/autotest_common.sh@1585 -- # waitforlisten 1320061 00:03:19.668 10:16:53 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:19.668 10:16:53 -- common/autotest_common.sh@835 -- # '[' -z 1320061 ']' 00:03:19.668 10:16:53 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:19.668 10:16:53 -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:19.668 10:16:53 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:19.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:19.668 10:16:53 -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:19.668 10:16:53 -- common/autotest_common.sh@10 -- # set +x 00:03:19.668 [2024-12-12 10:16:53.515102] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
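
Note: the opal_revert_cleanup trace just above picks the controllers to revert by PCI device id. get_nvme_bdfs lists NVMe transport addresses through gen_nvme.sh, and each BDF is kept only if its sysfs device id matches 0x0a54. Condensed into a standalone sketch (helper names and paths follow the trace; illustrative only):

    # Sketch of the BDF discovery and device-id filter traced above.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    _bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    bdfs=()
    for bdf in "${_bdfs[@]}"; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")
        [[ $device == 0x0a54 ]] && bdfs+=("$bdf")   # device id this run filters on
    done
    printf '%s\n' "${bdfs[@]}"                      # -> 0000:5e:00.0 on this node
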
00:03:19.668 [2024-12-12 10:16:53.515151] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1320061 ]
00:03:19.668 [2024-12-12 10:16:53.590778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:19.668 [2024-12-12 10:16:53.632405] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:03:19.927 10:16:53 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:03:19.927 10:16:53 -- common/autotest_common.sh@868 -- # return 0
00:03:19.927 10:16:53 -- common/autotest_common.sh@1587 -- # bdf_id=0
00:03:19.927 10:16:53 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}"
00:03:19.927 10:16:53 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0
00:03:23.212 nvme0n1
00:03:23.212 10:16:56 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:03:23.212 [2024-12-12 10:16:57.026984] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18
00:03:23.212 [2024-12-12 10:16:57.027014] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18
00:03:23.212 request:
00:03:23.212 {
00:03:23.212 "nvme_ctrlr_name": "nvme0",
00:03:23.212 "password": "test",
00:03:23.212 "method": "bdev_nvme_opal_revert",
00:03:23.212 "req_id": 1
00:03:23.212 }
00:03:23.212 Got JSON-RPC error response
00:03:23.212 response:
00:03:23.212 {
00:03:23.212 "code": -32603,
00:03:23.212 "message": "Internal error"
00:03:23.212 }
00:03:23.212 10:16:57 -- common/autotest_common.sh@1591 -- # true
00:03:23.212 10:16:57 -- common/autotest_common.sh@1592 -- # (( ++bdf_id ))
00:03:23.212 10:16:57 -- common/autotest_common.sh@1595 -- # killprocess 1320061
00:03:23.212 10:16:57 -- common/autotest_common.sh@954 -- # '[' -z 1320061 ']'
00:03:23.212 10:16:57 -- common/autotest_common.sh@958 -- # kill -0 1320061
00:03:23.212 10:16:57 -- common/autotest_common.sh@959 -- # uname
00:03:23.212 10:16:57 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:03:23.212 10:16:57 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1320061
00:03:23.212 10:16:57 -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:03:23.212 10:16:57 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:03:23.212 10:16:57 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1320061'
00:03:23.212 killing process with pid 1320061
00:03:23.212 10:16:57 -- common/autotest_common.sh@973 -- # kill 1320061
00:03:23.212 10:16:57 -- common/autotest_common.sh@978 -- # wait 1320061
00:03:25.117 10:16:58 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:03:25.117 10:16:58 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:03:25.117 10:16:58 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:03:25.117 10:16:58 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:03:25.117 10:16:58 -- spdk/autotest.sh@149 -- # timing_enter lib
00:03:25.117 10:16:58 -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:25.117 10:16:58 -- common/autotest_common.sh@10 -- # set +x
00:03:25.117 10:16:58 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:03:25.117 10:16:58 -- spdk/autotest.sh@155 -- # run_test env
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:25.117 10:16:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:25.117 10:16:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:25.117 10:16:58 -- common/autotest_common.sh@10 -- # set +x 00:03:25.117 ************************************ 00:03:25.117 START TEST env 00:03:25.117 ************************************ 00:03:25.117 10:16:58 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:25.117 * Looking for test storage... 00:03:25.117 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:25.117 10:16:58 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:25.117 10:16:58 env -- common/autotest_common.sh@1711 -- # lcov --version 00:03:25.117 10:16:58 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:25.117 10:16:58 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:25.117 10:16:58 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:25.117 10:16:58 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:25.117 10:16:58 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:25.117 10:16:58 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:25.117 10:16:58 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:25.117 10:16:58 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:25.117 10:16:58 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:25.117 10:16:58 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:25.117 10:16:58 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:25.117 10:16:58 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:25.117 10:16:58 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:25.117 10:16:58 env -- scripts/common.sh@344 -- # case "$op" in 00:03:25.117 10:16:58 env -- scripts/common.sh@345 -- # : 1 00:03:25.117 10:16:58 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:25.118 10:16:58 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:25.118 10:16:58 env -- scripts/common.sh@365 -- # decimal 1 00:03:25.118 10:16:58 env -- scripts/common.sh@353 -- # local d=1 00:03:25.118 10:16:58 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:25.118 10:16:58 env -- scripts/common.sh@355 -- # echo 1 00:03:25.118 10:16:58 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:25.118 10:16:58 env -- scripts/common.sh@366 -- # decimal 2 00:03:25.118 10:16:58 env -- scripts/common.sh@353 -- # local d=2 00:03:25.118 10:16:58 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:25.118 10:16:58 env -- scripts/common.sh@355 -- # echo 2 00:03:25.118 10:16:58 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:25.118 10:16:58 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:25.118 10:16:58 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:25.118 10:16:58 env -- scripts/common.sh@368 -- # return 0 00:03:25.118 10:16:58 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:25.118 10:16:58 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:25.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.118 --rc genhtml_branch_coverage=1 00:03:25.118 --rc genhtml_function_coverage=1 00:03:25.118 --rc genhtml_legend=1 00:03:25.118 --rc geninfo_all_blocks=1 00:03:25.118 --rc geninfo_unexecuted_blocks=1 00:03:25.118 00:03:25.118 ' 00:03:25.118 10:16:58 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:25.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.118 --rc genhtml_branch_coverage=1 00:03:25.118 --rc genhtml_function_coverage=1 00:03:25.118 --rc genhtml_legend=1 00:03:25.118 --rc geninfo_all_blocks=1 00:03:25.118 --rc geninfo_unexecuted_blocks=1 00:03:25.118 00:03:25.118 ' 00:03:25.118 10:16:58 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:25.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.118 --rc genhtml_branch_coverage=1 00:03:25.118 --rc genhtml_function_coverage=1 00:03:25.118 --rc genhtml_legend=1 00:03:25.118 --rc geninfo_all_blocks=1 00:03:25.118 --rc geninfo_unexecuted_blocks=1 00:03:25.118 00:03:25.118 ' 00:03:25.118 10:16:58 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:25.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.118 --rc genhtml_branch_coverage=1 00:03:25.118 --rc genhtml_function_coverage=1 00:03:25.118 --rc genhtml_legend=1 00:03:25.118 --rc geninfo_all_blocks=1 00:03:25.118 --rc geninfo_unexecuted_blocks=1 00:03:25.118 00:03:25.118 ' 00:03:25.118 10:16:58 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:25.118 10:16:58 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:25.118 10:16:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:25.118 10:16:58 env -- common/autotest_common.sh@10 -- # set +x 00:03:25.118 ************************************ 00:03:25.118 START TEST env_memory 00:03:25.118 ************************************ 00:03:25.118 10:16:58 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:25.118 00:03:25.118 00:03:25.118 CUnit - A unit testing framework for C - Version 2.1-3 00:03:25.118 http://cunit.sourceforge.net/ 00:03:25.118 00:03:25.118 00:03:25.118 Suite: memory 00:03:25.118 Test: alloc and free memory map ...[2024-12-12 10:16:58.940854] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:25.118 passed 00:03:25.118 Test: mem map translation ...[2024-12-12 10:16:58.958681] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:25.118 [2024-12-12 10:16:58.958693] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:25.118 [2024-12-12 10:16:58.958726] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:25.118 [2024-12-12 10:16:58.958732] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:25.118 passed 00:03:25.118 Test: mem map registration ...[2024-12-12 10:16:58.994300] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:25.118 [2024-12-12 10:16:58.994313] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:25.118 passed 00:03:25.118 Test: mem map adjacent registrations ...passed 00:03:25.118 00:03:25.118 Run Summary: Type Total Ran Passed Failed Inactive 00:03:25.118 suites 1 1 n/a 0 0 00:03:25.118 tests 4 4 4 0 0 00:03:25.118 asserts 152 152 152 0 n/a 00:03:25.118 00:03:25.118 Elapsed time = 0.125 seconds 00:03:25.118 00:03:25.118 real 0m0.133s 00:03:25.118 user 0m0.126s 00:03:25.118 sys 0m0.006s 00:03:25.118 10:16:59 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:25.118 10:16:59 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:25.118 ************************************ 00:03:25.118 END TEST env_memory 00:03:25.118 ************************************ 00:03:25.118 10:16:59 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:25.118 10:16:59 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:25.118 10:16:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:25.118 10:16:59 env -- common/autotest_common.sh@10 -- # set +x 00:03:25.118 ************************************ 00:03:25.118 START TEST env_vtophys 00:03:25.118 ************************************ 00:03:25.118 10:16:59 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:25.118 EAL: lib.eal log level changed from notice to debug 00:03:25.118 EAL: Detected lcore 0 as core 0 on socket 0 00:03:25.118 EAL: Detected lcore 1 as core 1 on socket 0 00:03:25.118 EAL: Detected lcore 2 as core 2 on socket 0 00:03:25.118 EAL: Detected lcore 3 as core 3 on socket 0 00:03:25.118 EAL: Detected lcore 4 as core 4 on socket 0 00:03:25.118 EAL: Detected lcore 5 as core 5 on socket 0 00:03:25.118 EAL: Detected lcore 6 as core 6 on socket 0 00:03:25.118 EAL: Detected lcore 7 as core 8 on socket 0 00:03:25.118 EAL: Detected lcore 8 as core 9 on socket 0 00:03:25.118 EAL: Detected lcore 9 as core 10 on socket 0 00:03:25.118 EAL: Detected lcore 10 as 
core 11 on socket 0 00:03:25.118 EAL: Detected lcore 11 as core 12 on socket 0 00:03:25.118 EAL: Detected lcore 12 as core 13 on socket 0 00:03:25.118 EAL: Detected lcore 13 as core 16 on socket 0 00:03:25.118 EAL: Detected lcore 14 as core 17 on socket 0 00:03:25.118 EAL: Detected lcore 15 as core 18 on socket 0 00:03:25.118 EAL: Detected lcore 16 as core 19 on socket 0 00:03:25.118 EAL: Detected lcore 17 as core 20 on socket 0 00:03:25.118 EAL: Detected lcore 18 as core 21 on socket 0 00:03:25.118 EAL: Detected lcore 19 as core 25 on socket 0 00:03:25.118 EAL: Detected lcore 20 as core 26 on socket 0 00:03:25.118 EAL: Detected lcore 21 as core 27 on socket 0 00:03:25.118 EAL: Detected lcore 22 as core 28 on socket 0 00:03:25.118 EAL: Detected lcore 23 as core 29 on socket 0 00:03:25.118 EAL: Detected lcore 24 as core 0 on socket 1 00:03:25.118 EAL: Detected lcore 25 as core 1 on socket 1 00:03:25.118 EAL: Detected lcore 26 as core 2 on socket 1 00:03:25.118 EAL: Detected lcore 27 as core 3 on socket 1 00:03:25.118 EAL: Detected lcore 28 as core 4 on socket 1 00:03:25.118 EAL: Detected lcore 29 as core 5 on socket 1 00:03:25.118 EAL: Detected lcore 30 as core 6 on socket 1 00:03:25.118 EAL: Detected lcore 31 as core 8 on socket 1 00:03:25.118 EAL: Detected lcore 32 as core 9 on socket 1 00:03:25.118 EAL: Detected lcore 33 as core 10 on socket 1 00:03:25.118 EAL: Detected lcore 34 as core 11 on socket 1 00:03:25.118 EAL: Detected lcore 35 as core 12 on socket 1 00:03:25.118 EAL: Detected lcore 36 as core 13 on socket 1 00:03:25.118 EAL: Detected lcore 37 as core 16 on socket 1 00:03:25.118 EAL: Detected lcore 38 as core 17 on socket 1 00:03:25.118 EAL: Detected lcore 39 as core 18 on socket 1 00:03:25.118 EAL: Detected lcore 40 as core 19 on socket 1 00:03:25.118 EAL: Detected lcore 41 as core 20 on socket 1 00:03:25.118 EAL: Detected lcore 42 as core 21 on socket 1 00:03:25.118 EAL: Detected lcore 43 as core 25 on socket 1 00:03:25.118 EAL: Detected lcore 44 as core 26 on socket 1 00:03:25.118 EAL: Detected lcore 45 as core 27 on socket 1 00:03:25.118 EAL: Detected lcore 46 as core 28 on socket 1 00:03:25.118 EAL: Detected lcore 47 as core 29 on socket 1 00:03:25.118 EAL: Detected lcore 48 as core 0 on socket 0 00:03:25.118 EAL: Detected lcore 49 as core 1 on socket 0 00:03:25.118 EAL: Detected lcore 50 as core 2 on socket 0 00:03:25.118 EAL: Detected lcore 51 as core 3 on socket 0 00:03:25.118 EAL: Detected lcore 52 as core 4 on socket 0 00:03:25.118 EAL: Detected lcore 53 as core 5 on socket 0 00:03:25.118 EAL: Detected lcore 54 as core 6 on socket 0 00:03:25.118 EAL: Detected lcore 55 as core 8 on socket 0 00:03:25.118 EAL: Detected lcore 56 as core 9 on socket 0 00:03:25.118 EAL: Detected lcore 57 as core 10 on socket 0 00:03:25.118 EAL: Detected lcore 58 as core 11 on socket 0 00:03:25.118 EAL: Detected lcore 59 as core 12 on socket 0 00:03:25.118 EAL: Detected lcore 60 as core 13 on socket 0 00:03:25.118 EAL: Detected lcore 61 as core 16 on socket 0 00:03:25.118 EAL: Detected lcore 62 as core 17 on socket 0 00:03:25.118 EAL: Detected lcore 63 as core 18 on socket 0 00:03:25.118 EAL: Detected lcore 64 as core 19 on socket 0 00:03:25.118 EAL: Detected lcore 65 as core 20 on socket 0 00:03:25.118 EAL: Detected lcore 66 as core 21 on socket 0 00:03:25.118 EAL: Detected lcore 67 as core 25 on socket 0 00:03:25.118 EAL: Detected lcore 68 as core 26 on socket 0 00:03:25.118 EAL: Detected lcore 69 as core 27 on socket 0 00:03:25.118 EAL: Detected lcore 70 as core 28 on socket 0 00:03:25.118 
EAL: Detected lcore 71 as core 29 on socket 0 00:03:25.118 EAL: Detected lcore 72 as core 0 on socket 1 00:03:25.118 EAL: Detected lcore 73 as core 1 on socket 1 00:03:25.118 EAL: Detected lcore 74 as core 2 on socket 1 00:03:25.118 EAL: Detected lcore 75 as core 3 on socket 1 00:03:25.118 EAL: Detected lcore 76 as core 4 on socket 1 00:03:25.118 EAL: Detected lcore 77 as core 5 on socket 1 00:03:25.118 EAL: Detected lcore 78 as core 6 on socket 1 00:03:25.118 EAL: Detected lcore 79 as core 8 on socket 1 00:03:25.118 EAL: Detected lcore 80 as core 9 on socket 1 00:03:25.118 EAL: Detected lcore 81 as core 10 on socket 1 00:03:25.119 EAL: Detected lcore 82 as core 11 on socket 1 00:03:25.119 EAL: Detected lcore 83 as core 12 on socket 1 00:03:25.119 EAL: Detected lcore 84 as core 13 on socket 1 00:03:25.119 EAL: Detected lcore 85 as core 16 on socket 1 00:03:25.119 EAL: Detected lcore 86 as core 17 on socket 1 00:03:25.119 EAL: Detected lcore 87 as core 18 on socket 1 00:03:25.119 EAL: Detected lcore 88 as core 19 on socket 1 00:03:25.119 EAL: Detected lcore 89 as core 20 on socket 1 00:03:25.119 EAL: Detected lcore 90 as core 21 on socket 1 00:03:25.119 EAL: Detected lcore 91 as core 25 on socket 1 00:03:25.119 EAL: Detected lcore 92 as core 26 on socket 1 00:03:25.119 EAL: Detected lcore 93 as core 27 on socket 1 00:03:25.119 EAL: Detected lcore 94 as core 28 on socket 1 00:03:25.119 EAL: Detected lcore 95 as core 29 on socket 1 00:03:25.119 EAL: Maximum logical cores by configuration: 128 00:03:25.119 EAL: Detected CPU lcores: 96 00:03:25.119 EAL: Detected NUMA nodes: 2 00:03:25.119 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:25.119 EAL: Detected shared linkage of DPDK 00:03:25.119 EAL: No shared files mode enabled, IPC will be disabled 00:03:25.378 EAL: Bus pci wants IOVA as 'DC' 00:03:25.378 EAL: Buses did not request a specific IOVA mode. 00:03:25.378 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:25.378 EAL: Selected IOVA mode 'VA' 00:03:25.378 EAL: Probing VFIO support... 00:03:25.378 EAL: IOMMU type 1 (Type 1) is supported 00:03:25.378 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:25.378 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:25.378 EAL: VFIO support initialized 00:03:25.378 EAL: Ask a virtual area of 0x2e000 bytes 00:03:25.378 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:25.378 EAL: Setting up physically contiguous memory... 
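
Note: EAL probed VFIO above and selected IOVA mode 'VA' because an IOMMU is available. A host-side spot check along the same lines (a sketch, assuming the standard kernel interface that exposes active IOMMU groups under /sys/kernel/iommu_groups):

    # Sketch: the IOVA-as-VA choice above needs a populated iommu_groups tree.
    if [[ -d /sys/kernel/iommu_groups && -n $(ls -A /sys/kernel/iommu_groups 2>/dev/null) ]]; then
        echo "IOMMU active: VFIO with IOVA as VA is usable"
    else
        echo "no IOMMU groups visible: EAL would typically fall back to IOVA as PA"
    fi
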
00:03:25.378 EAL: Setting maximum number of open files to 524288 00:03:25.378 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:25.378 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:25.378 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:25.378 EAL: Ask a virtual area of 0x61000 bytes 00:03:25.378 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:25.378 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:25.378 EAL: Ask a virtual area of 0x400000000 bytes 00:03:25.378 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:25.378 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:25.378 EAL: Ask a virtual area of 0x61000 bytes 00:03:25.378 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:25.378 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:25.378 EAL: Ask a virtual area of 0x400000000 bytes 00:03:25.378 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:25.378 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:25.378 EAL: Ask a virtual area of 0x61000 bytes 00:03:25.378 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:25.378 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:25.378 EAL: Ask a virtual area of 0x400000000 bytes 00:03:25.378 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:25.378 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:25.378 EAL: Ask a virtual area of 0x61000 bytes 00:03:25.378 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:25.378 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:25.378 EAL: Ask a virtual area of 0x400000000 bytes 00:03:25.378 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:25.378 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:25.378 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:25.378 EAL: Ask a virtual area of 0x61000 bytes 00:03:25.378 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:25.378 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:25.378 EAL: Ask a virtual area of 0x400000000 bytes 00:03:25.378 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:25.378 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:25.378 EAL: Ask a virtual area of 0x61000 bytes 00:03:25.378 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:25.378 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:25.378 EAL: Ask a virtual area of 0x400000000 bytes 00:03:25.378 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:25.378 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:25.378 EAL: Ask a virtual area of 0x61000 bytes 00:03:25.378 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:25.378 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:25.378 EAL: Ask a virtual area of 0x400000000 bytes 00:03:25.378 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:25.378 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:25.378 EAL: Ask a virtual area of 0x61000 bytes 00:03:25.378 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:25.378 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:25.378 EAL: Ask a virtual area of 0x400000000 bytes 00:03:25.378 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:03:25.378 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:25.378 EAL: Hugepages will be freed exactly as allocated. 00:03:25.378 EAL: No shared files mode enabled, IPC is disabled 00:03:25.378 EAL: No shared files mode enabled, IPC is disabled 00:03:25.378 EAL: TSC frequency is ~2100000 KHz 00:03:25.378 EAL: Main lcore 0 is ready (tid=7fe221a66a00;cpuset=[0]) 00:03:25.378 EAL: Trying to obtain current memory policy. 00:03:25.378 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:25.378 EAL: Restoring previous memory policy: 0 00:03:25.378 EAL: request: mp_malloc_sync 00:03:25.378 EAL: No shared files mode enabled, IPC is disabled 00:03:25.378 EAL: Heap on socket 0 was expanded by 2MB 00:03:25.378 EAL: No shared files mode enabled, IPC is disabled 00:03:25.378 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:25.378 EAL: Mem event callback 'spdk:(nil)' registered 00:03:25.378 00:03:25.378 00:03:25.378 CUnit - A unit testing framework for C - Version 2.1-3 00:03:25.378 http://cunit.sourceforge.net/ 00:03:25.378 00:03:25.378 00:03:25.378 Suite: components_suite 00:03:25.379 Test: vtophys_malloc_test ...passed 00:03:25.379 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:25.379 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:25.379 EAL: Restoring previous memory policy: 4 00:03:25.379 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.379 EAL: request: mp_malloc_sync 00:03:25.379 EAL: No shared files mode enabled, IPC is disabled 00:03:25.379 EAL: Heap on socket 0 was expanded by 4MB 00:03:25.379 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.379 EAL: request: mp_malloc_sync 00:03:25.379 EAL: No shared files mode enabled, IPC is disabled 00:03:25.379 EAL: Heap on socket 0 was shrunk by 4MB 00:03:25.379 EAL: Trying to obtain current memory policy. 00:03:25.379 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:25.379 EAL: Restoring previous memory policy: 4 00:03:25.379 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.379 EAL: request: mp_malloc_sync 00:03:25.379 EAL: No shared files mode enabled, IPC is disabled 00:03:25.379 EAL: Heap on socket 0 was expanded by 6MB 00:03:25.379 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.379 EAL: request: mp_malloc_sync 00:03:25.379 EAL: No shared files mode enabled, IPC is disabled 00:03:25.379 EAL: Heap on socket 0 was shrunk by 6MB 00:03:25.379 EAL: Trying to obtain current memory policy. 00:03:25.379 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:25.379 EAL: Restoring previous memory policy: 4 00:03:25.379 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.379 EAL: request: mp_malloc_sync 00:03:25.379 EAL: No shared files mode enabled, IPC is disabled 00:03:25.379 EAL: Heap on socket 0 was expanded by 10MB 00:03:25.379 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.379 EAL: request: mp_malloc_sync 00:03:25.379 EAL: No shared files mode enabled, IPC is disabled 00:03:25.379 EAL: Heap on socket 0 was shrunk by 10MB 00:03:25.379 EAL: Trying to obtain current memory policy. 
00:03:25.379 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:25.379 EAL: Restoring previous memory policy: 4 00:03:25.379 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.379 EAL: request: mp_malloc_sync 00:03:25.379 EAL: No shared files mode enabled, IPC is disabled 00:03:25.379 EAL: Heap on socket 0 was expanded by 18MB 00:03:25.379 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.379 EAL: request: mp_malloc_sync 00:03:25.379 EAL: No shared files mode enabled, IPC is disabled 00:03:25.379 EAL: Heap on socket 0 was shrunk by 18MB 00:03:25.379 EAL: Trying to obtain current memory policy. 00:03:25.379 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:25.379 EAL: Restoring previous memory policy: 4 00:03:25.379 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.379 EAL: request: mp_malloc_sync 00:03:25.379 EAL: No shared files mode enabled, IPC is disabled 00:03:25.379 EAL: Heap on socket 0 was expanded by 34MB 00:03:25.379 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.379 EAL: request: mp_malloc_sync 00:03:25.379 EAL: No shared files mode enabled, IPC is disabled 00:03:25.379 EAL: Heap on socket 0 was shrunk by 34MB 00:03:25.379 EAL: Trying to obtain current memory policy. 00:03:25.379 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:25.379 EAL: Restoring previous memory policy: 4 00:03:25.379 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.379 EAL: request: mp_malloc_sync 00:03:25.379 EAL: No shared files mode enabled, IPC is disabled 00:03:25.379 EAL: Heap on socket 0 was expanded by 66MB 00:03:25.379 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.379 EAL: request: mp_malloc_sync 00:03:25.379 EAL: No shared files mode enabled, IPC is disabled 00:03:25.379 EAL: Heap on socket 0 was shrunk by 66MB 00:03:25.379 EAL: Trying to obtain current memory policy. 00:03:25.379 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:25.379 EAL: Restoring previous memory policy: 4 00:03:25.379 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.379 EAL: request: mp_malloc_sync 00:03:25.379 EAL: No shared files mode enabled, IPC is disabled 00:03:25.379 EAL: Heap on socket 0 was expanded by 130MB 00:03:25.379 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.379 EAL: request: mp_malloc_sync 00:03:25.379 EAL: No shared files mode enabled, IPC is disabled 00:03:25.379 EAL: Heap on socket 0 was shrunk by 130MB 00:03:25.379 EAL: Trying to obtain current memory policy. 00:03:25.379 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:25.379 EAL: Restoring previous memory policy: 4 00:03:25.379 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.379 EAL: request: mp_malloc_sync 00:03:25.379 EAL: No shared files mode enabled, IPC is disabled 00:03:25.379 EAL: Heap on socket 0 was expanded by 258MB 00:03:25.379 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.638 EAL: request: mp_malloc_sync 00:03:25.638 EAL: No shared files mode enabled, IPC is disabled 00:03:25.638 EAL: Heap on socket 0 was shrunk by 258MB 00:03:25.638 EAL: Trying to obtain current memory policy. 
00:03:25.638 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:25.638 EAL: Restoring previous memory policy: 4 00:03:25.638 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.638 EAL: request: mp_malloc_sync 00:03:25.638 EAL: No shared files mode enabled, IPC is disabled 00:03:25.638 EAL: Heap on socket 0 was expanded by 514MB 00:03:25.638 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.897 EAL: request: mp_malloc_sync 00:03:25.897 EAL: No shared files mode enabled, IPC is disabled 00:03:25.897 EAL: Heap on socket 0 was shrunk by 514MB 00:03:25.897 EAL: Trying to obtain current memory policy. 00:03:25.897 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:25.897 EAL: Restoring previous memory policy: 4 00:03:25.897 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.897 EAL: request: mp_malloc_sync 00:03:25.897 EAL: No shared files mode enabled, IPC is disabled 00:03:25.897 EAL: Heap on socket 0 was expanded by 1026MB 00:03:26.155 EAL: Calling mem event callback 'spdk:(nil)' 00:03:26.414 EAL: request: mp_malloc_sync 00:03:26.414 EAL: No shared files mode enabled, IPC is disabled 00:03:26.414 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:26.414 passed 00:03:26.414 00:03:26.414 Run Summary: Type Total Ran Passed Failed Inactive 00:03:26.414 suites 1 1 n/a 0 0 00:03:26.414 tests 2 2 2 0 0 00:03:26.414 asserts 497 497 497 0 n/a 00:03:26.414 00:03:26.414 Elapsed time = 0.965 seconds 00:03:26.414 EAL: Calling mem event callback 'spdk:(nil)' 00:03:26.414 EAL: request: mp_malloc_sync 00:03:26.414 EAL: No shared files mode enabled, IPC is disabled 00:03:26.414 EAL: Heap on socket 0 was shrunk by 2MB 00:03:26.414 EAL: No shared files mode enabled, IPC is disabled 00:03:26.414 EAL: No shared files mode enabled, IPC is disabled 00:03:26.414 EAL: No shared files mode enabled, IPC is disabled 00:03:26.414 00:03:26.414 real 0m1.099s 00:03:26.414 user 0m0.641s 00:03:26.414 sys 0m0.427s 00:03:26.414 10:17:00 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:26.414 10:17:00 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:26.414 ************************************ 00:03:26.414 END TEST env_vtophys 00:03:26.414 ************************************ 00:03:26.414 10:17:00 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:26.414 10:17:00 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:26.414 10:17:00 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:26.414 10:17:00 env -- common/autotest_common.sh@10 -- # set +x 00:03:26.414 ************************************ 00:03:26.414 START TEST env_pci 00:03:26.414 ************************************ 00:03:26.414 10:17:00 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:26.414 00:03:26.414 00:03:26.414 CUnit - A unit testing framework for C - Version 2.1-3 00:03:26.414 http://cunit.sourceforge.net/ 00:03:26.414 00:03:26.414 00:03:26.414 Suite: pci 00:03:26.414 Test: pci_hook ...[2024-12-12 10:17:00.290957] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1321302 has claimed it 00:03:26.414 EAL: Cannot find device (10000:00:01.0) 00:03:26.414 EAL: Failed to attach device on primary process 00:03:26.414 passed 00:03:26.414 00:03:26.414 Run Summary: Type Total Ran Passed Failed Inactive 
00:03:26.414 suites 1 1 n/a 0 0 00:03:26.414 tests 1 1 1 0 0 00:03:26.414 asserts 25 25 25 0 n/a 00:03:26.414 00:03:26.414 Elapsed time = 0.025 seconds 00:03:26.414 00:03:26.414 real 0m0.043s 00:03:26.414 user 0m0.013s 00:03:26.414 sys 0m0.030s 00:03:26.414 10:17:00 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:26.414 10:17:00 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:26.414 ************************************ 00:03:26.414 END TEST env_pci 00:03:26.414 ************************************ 00:03:26.414 10:17:00 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:26.414 10:17:00 env -- env/env.sh@15 -- # uname 00:03:26.414 10:17:00 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:26.414 10:17:00 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:26.414 10:17:00 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:26.414 10:17:00 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:26.414 10:17:00 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:26.414 10:17:00 env -- common/autotest_common.sh@10 -- # set +x 00:03:26.414 ************************************ 00:03:26.414 START TEST env_dpdk_post_init 00:03:26.414 ************************************ 00:03:26.414 10:17:00 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:26.414 EAL: Detected CPU lcores: 96 00:03:26.414 EAL: Detected NUMA nodes: 2 00:03:26.414 EAL: Detected shared linkage of DPDK 00:03:26.414 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:26.673 EAL: Selected IOVA mode 'VA' 00:03:26.673 EAL: VFIO support initialized 00:03:26.673 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:26.673 EAL: Using IOMMU type 1 (Type 1) 00:03:26.673 EAL: Ignore mapping IO port bar(1) 00:03:26.673 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:03:26.673 EAL: Ignore mapping IO port bar(1) 00:03:26.673 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:03:26.673 EAL: Ignore mapping IO port bar(1) 00:03:26.673 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:03:26.673 EAL: Ignore mapping IO port bar(1) 00:03:26.673 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:03:26.673 EAL: Ignore mapping IO port bar(1) 00:03:26.673 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:03:26.673 EAL: Ignore mapping IO port bar(1) 00:03:26.673 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:03:26.673 EAL: Ignore mapping IO port bar(1) 00:03:26.673 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:03:26.673 EAL: Ignore mapping IO port bar(1) 00:03:26.673 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:03:27.611 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:03:27.611 EAL: Ignore mapping IO port bar(1) 00:03:27.611 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:03:27.611 EAL: Ignore mapping IO port bar(1) 00:03:27.611 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:03:27.611 EAL: Ignore mapping IO port bar(1) 00:03:27.611 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:03:27.611 EAL: Ignore mapping IO port bar(1) 00:03:27.611 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:03:27.611 EAL: Ignore mapping IO port bar(1) 00:03:27.611 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:03:27.611 EAL: Ignore mapping IO port bar(1) 00:03:27.611 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:03:27.611 EAL: Ignore mapping IO port bar(1) 00:03:27.611 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:03:27.611 EAL: Ignore mapping IO port bar(1) 00:03:27.611 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:03:30.899 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:03:30.899 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:03:30.899 Starting DPDK initialization... 00:03:30.899 Starting SPDK post initialization... 00:03:30.899 SPDK NVMe probe 00:03:30.899 Attaching to 0000:5e:00.0 00:03:30.899 Attached to 0000:5e:00.0 00:03:30.899 Cleaning up... 00:03:30.899 00:03:30.899 real 0m4.361s 00:03:30.899 user 0m2.967s 00:03:30.899 sys 0m0.467s 00:03:30.899 10:17:04 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:30.899 10:17:04 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:30.899 ************************************ 00:03:30.899 END TEST env_dpdk_post_init 00:03:30.899 ************************************ 00:03:30.899 10:17:04 env -- env/env.sh@26 -- # uname 00:03:30.899 10:17:04 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:30.899 10:17:04 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:30.899 10:17:04 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:30.899 10:17:04 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:30.899 10:17:04 env -- common/autotest_common.sh@10 -- # set +x 00:03:30.899 ************************************ 00:03:30.899 START TEST env_mem_callbacks 00:03:30.899 ************************************ 00:03:30.899 10:17:04 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:30.899 EAL: Detected CPU lcores: 96 00:03:30.899 EAL: Detected NUMA nodes: 2 00:03:30.899 EAL: Detected shared linkage of DPDK 00:03:30.899 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:30.899 EAL: Selected IOVA mode 'VA' 00:03:30.899 EAL: VFIO support initialized 00:03:30.899 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:30.899 00:03:30.899 00:03:30.899 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.899 http://cunit.sourceforge.net/ 00:03:30.899 00:03:30.899 00:03:30.899 Suite: memory 00:03:30.899 Test: test ... 
00:03:30.899 register 0x200000200000 2097152 00:03:30.899 malloc 3145728 00:03:30.899 register 0x200000400000 4194304 00:03:30.899 buf 0x200000500000 len 3145728 PASSED 00:03:30.899 malloc 64 00:03:30.899 buf 0x2000004fff40 len 64 PASSED 00:03:30.899 malloc 4194304 00:03:30.899 register 0x200000800000 6291456 00:03:30.899 buf 0x200000a00000 len 4194304 PASSED 00:03:30.899 free 0x200000500000 3145728 00:03:30.899 free 0x2000004fff40 64 00:03:30.899 unregister 0x200000400000 4194304 PASSED 00:03:30.899 free 0x200000a00000 4194304 00:03:30.899 unregister 0x200000800000 6291456 PASSED 00:03:30.899 malloc 8388608 00:03:30.899 register 0x200000400000 10485760 00:03:30.899 buf 0x200000600000 len 8388608 PASSED 00:03:30.899 free 0x200000600000 8388608 00:03:30.899 unregister 0x200000400000 10485760 PASSED 00:03:30.899 passed 00:03:30.899 00:03:30.899 Run Summary: Type Total Ran Passed Failed Inactive 00:03:30.899 suites 1 1 n/a 0 0 00:03:30.899 tests 1 1 1 0 0 00:03:30.899 asserts 15 15 15 0 n/a 00:03:30.899 00:03:30.899 Elapsed time = 0.008 seconds 00:03:30.899 00:03:30.899 real 0m0.059s 00:03:30.899 user 0m0.018s 00:03:30.899 sys 0m0.041s 00:03:30.899 10:17:04 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:30.899 10:17:04 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:30.899 ************************************ 00:03:30.899 END TEST env_mem_callbacks 00:03:30.899 ************************************ 00:03:31.158 00:03:31.158 real 0m6.226s 00:03:31.158 user 0m3.988s 00:03:31.158 sys 0m1.316s 00:03:31.158 10:17:04 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:31.158 10:17:04 env -- common/autotest_common.sh@10 -- # set +x 00:03:31.158 ************************************ 00:03:31.158 END TEST env 00:03:31.158 ************************************ 00:03:31.158 10:17:04 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:31.158 10:17:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:31.158 10:17:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:31.158 10:17:04 -- common/autotest_common.sh@10 -- # set +x 00:03:31.158 ************************************ 00:03:31.158 START TEST rpc 00:03:31.158 ************************************ 00:03:31.158 10:17:04 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:31.158 * Looking for test storage... 
00:03:31.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:31.158 10:17:05 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:31.158 10:17:05 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:03:31.158 10:17:05 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:31.158 10:17:05 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:31.158 10:17:05 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:31.158 10:17:05 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:31.158 10:17:05 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:31.158 10:17:05 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:31.158 10:17:05 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:31.158 10:17:05 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:31.158 10:17:05 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:31.158 10:17:05 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:31.158 10:17:05 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:31.158 10:17:05 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:31.158 10:17:05 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:31.158 10:17:05 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:31.158 10:17:05 rpc -- scripts/common.sh@345 -- # : 1 00:03:31.158 10:17:05 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:31.158 10:17:05 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:31.158 10:17:05 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:31.158 10:17:05 rpc -- scripts/common.sh@353 -- # local d=1 00:03:31.158 10:17:05 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:31.158 10:17:05 rpc -- scripts/common.sh@355 -- # echo 1 00:03:31.158 10:17:05 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:31.158 10:17:05 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:31.158 10:17:05 rpc -- scripts/common.sh@353 -- # local d=2 00:03:31.158 10:17:05 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:31.158 10:17:05 rpc -- scripts/common.sh@355 -- # echo 2 00:03:31.159 10:17:05 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:31.159 10:17:05 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:31.159 10:17:05 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:31.159 10:17:05 rpc -- scripts/common.sh@368 -- # return 0 00:03:31.159 10:17:05 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:31.159 10:17:05 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:31.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.159 --rc genhtml_branch_coverage=1 00:03:31.159 --rc genhtml_function_coverage=1 00:03:31.159 --rc genhtml_legend=1 00:03:31.159 --rc geninfo_all_blocks=1 00:03:31.159 --rc geninfo_unexecuted_blocks=1 00:03:31.159 00:03:31.159 ' 00:03:31.159 10:17:05 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:31.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.159 --rc genhtml_branch_coverage=1 00:03:31.159 --rc genhtml_function_coverage=1 00:03:31.159 --rc genhtml_legend=1 00:03:31.159 --rc geninfo_all_blocks=1 00:03:31.159 --rc geninfo_unexecuted_blocks=1 00:03:31.159 00:03:31.159 ' 00:03:31.159 10:17:05 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:31.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.159 --rc genhtml_branch_coverage=1 00:03:31.159 --rc genhtml_function_coverage=1 
00:03:31.159 --rc genhtml_legend=1 00:03:31.159 --rc geninfo_all_blocks=1 00:03:31.159 --rc geninfo_unexecuted_blocks=1 00:03:31.159 00:03:31.159 ' 00:03:31.159 10:17:05 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:31.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.159 --rc genhtml_branch_coverage=1 00:03:31.159 --rc genhtml_function_coverage=1 00:03:31.159 --rc genhtml_legend=1 00:03:31.159 --rc geninfo_all_blocks=1 00:03:31.159 --rc geninfo_unexecuted_blocks=1 00:03:31.159 00:03:31.159 ' 00:03:31.159 10:17:05 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1322154 00:03:31.159 10:17:05 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:31.159 10:17:05 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:31.159 10:17:05 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1322154 00:03:31.159 10:17:05 rpc -- common/autotest_common.sh@835 -- # '[' -z 1322154 ']' 00:03:31.159 10:17:05 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:31.159 10:17:05 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:31.159 10:17:05 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:31.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:31.159 10:17:05 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:31.159 10:17:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:31.418 [2024-12-12 10:17:05.228276] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:03:31.418 [2024-12-12 10:17:05.228321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1322154 ] 00:03:31.418 [2024-12-12 10:17:05.299352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:31.418 [2024-12-12 10:17:05.338454] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:31.418 [2024-12-12 10:17:05.338494] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1322154' to capture a snapshot of events at runtime. 00:03:31.418 [2024-12-12 10:17:05.338502] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:31.418 [2024-12-12 10:17:05.338508] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:31.418 [2024-12-12 10:17:05.338513] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1322154 for offline analysis/debug. 
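Worth flagging before the rpc suites begin: because spdk_tgt was launched with '-e bdev', only the bdev tracepoint group (mask 0x8) is enabled — exactly what rpc_trace_cmd_test asserts further down — and its events accumulate in the pid-named shm file. A sketch of acting on the NOTICE above while pid 1322154 is still alive (the copy destination is arbitrary):

    # snapshot tracepoints of the live target, as the log itself suggests
    build/bin/spdk_trace -s spdk_tgt -p 1322154
    # or keep the shm file for offline analysis once the target exits
    cp /dev/shm/spdk_tgt_trace.pid1322154 /tmp/spdk_tgt_trace.pid1322154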
00:03:31.418 [2024-12-12 10:17:05.339006] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:03:31.677 10:17:05 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:31.677 10:17:05 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:31.677 10:17:05 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:31.677 10:17:05 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:31.677 10:17:05 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:31.677 10:17:05 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:31.677 10:17:05 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:31.677 10:17:05 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:31.677 10:17:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:31.677 ************************************ 00:03:31.677 START TEST rpc_integrity 00:03:31.677 ************************************ 00:03:31.677 10:17:05 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:31.677 10:17:05 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:31.677 10:17:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:31.677 10:17:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.677 10:17:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:31.677 10:17:05 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:31.677 10:17:05 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:31.677 10:17:05 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:31.677 10:17:05 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:31.677 10:17:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:31.677 10:17:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.677 10:17:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:31.677 10:17:05 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:31.677 10:17:05 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:31.677 10:17:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:31.677 10:17:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.677 10:17:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:31.677 10:17:05 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:31.677 { 00:03:31.677 "name": "Malloc0", 00:03:31.677 "aliases": [ 00:03:31.677 "21166937-4d02-4a9b-bc73-8283acaefa5f" 00:03:31.677 ], 00:03:31.677 "product_name": "Malloc disk", 00:03:31.677 "block_size": 512, 00:03:31.677 "num_blocks": 16384, 00:03:31.677 "uuid": "21166937-4d02-4a9b-bc73-8283acaefa5f", 00:03:31.677 "assigned_rate_limits": { 00:03:31.677 "rw_ios_per_sec": 0, 00:03:31.677 "rw_mbytes_per_sec": 0, 00:03:31.677 "r_mbytes_per_sec": 0, 00:03:31.677 "w_mbytes_per_sec": 0 00:03:31.677 }, 
00:03:31.677 "claimed": false, 00:03:31.677 "zoned": false, 00:03:31.677 "supported_io_types": { 00:03:31.677 "read": true, 00:03:31.677 "write": true, 00:03:31.677 "unmap": true, 00:03:31.677 "flush": true, 00:03:31.677 "reset": true, 00:03:31.677 "nvme_admin": false, 00:03:31.677 "nvme_io": false, 00:03:31.677 "nvme_io_md": false, 00:03:31.677 "write_zeroes": true, 00:03:31.677 "zcopy": true, 00:03:31.677 "get_zone_info": false, 00:03:31.677 "zone_management": false, 00:03:31.677 "zone_append": false, 00:03:31.677 "compare": false, 00:03:31.677 "compare_and_write": false, 00:03:31.677 "abort": true, 00:03:31.677 "seek_hole": false, 00:03:31.677 "seek_data": false, 00:03:31.677 "copy": true, 00:03:31.677 "nvme_iov_md": false 00:03:31.677 }, 00:03:31.677 "memory_domains": [ 00:03:31.677 { 00:03:31.677 "dma_device_id": "system", 00:03:31.677 "dma_device_type": 1 00:03:31.677 }, 00:03:31.677 { 00:03:31.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:31.677 "dma_device_type": 2 00:03:31.677 } 00:03:31.677 ], 00:03:31.677 "driver_specific": {} 00:03:31.677 } 00:03:31.677 ]' 00:03:31.677 10:17:05 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:31.936 10:17:05 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:31.936 10:17:05 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:31.936 10:17:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:31.936 10:17:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.936 [2024-12-12 10:17:05.739546] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:31.936 [2024-12-12 10:17:05.739582] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:31.936 [2024-12-12 10:17:05.739595] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d84740 00:03:31.936 [2024-12-12 10:17:05.739601] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:31.936 [2024-12-12 10:17:05.740681] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:31.937 [2024-12-12 10:17:05.740703] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:31.937 Passthru0 00:03:31.937 10:17:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:31.937 10:17:05 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:31.937 10:17:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:31.937 10:17:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.937 10:17:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:31.937 10:17:05 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:31.937 { 00:03:31.937 "name": "Malloc0", 00:03:31.937 "aliases": [ 00:03:31.937 "21166937-4d02-4a9b-bc73-8283acaefa5f" 00:03:31.937 ], 00:03:31.937 "product_name": "Malloc disk", 00:03:31.937 "block_size": 512, 00:03:31.937 "num_blocks": 16384, 00:03:31.937 "uuid": "21166937-4d02-4a9b-bc73-8283acaefa5f", 00:03:31.937 "assigned_rate_limits": { 00:03:31.937 "rw_ios_per_sec": 0, 00:03:31.937 "rw_mbytes_per_sec": 0, 00:03:31.937 "r_mbytes_per_sec": 0, 00:03:31.937 "w_mbytes_per_sec": 0 00:03:31.937 }, 00:03:31.937 "claimed": true, 00:03:31.937 "claim_type": "exclusive_write", 00:03:31.937 "zoned": false, 00:03:31.937 "supported_io_types": { 00:03:31.937 "read": true, 00:03:31.937 "write": true, 00:03:31.937 "unmap": true, 00:03:31.937 "flush": 
true, 00:03:31.937 "reset": true, 00:03:31.937 "nvme_admin": false, 00:03:31.937 "nvme_io": false, 00:03:31.937 "nvme_io_md": false, 00:03:31.937 "write_zeroes": true, 00:03:31.937 "zcopy": true, 00:03:31.937 "get_zone_info": false, 00:03:31.937 "zone_management": false, 00:03:31.937 "zone_append": false, 00:03:31.937 "compare": false, 00:03:31.937 "compare_and_write": false, 00:03:31.937 "abort": true, 00:03:31.937 "seek_hole": false, 00:03:31.937 "seek_data": false, 00:03:31.937 "copy": true, 00:03:31.937 "nvme_iov_md": false 00:03:31.937 }, 00:03:31.937 "memory_domains": [ 00:03:31.937 { 00:03:31.937 "dma_device_id": "system", 00:03:31.937 "dma_device_type": 1 00:03:31.937 }, 00:03:31.937 { 00:03:31.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:31.937 "dma_device_type": 2 00:03:31.937 } 00:03:31.937 ], 00:03:31.937 "driver_specific": {} 00:03:31.937 }, 00:03:31.937 { 00:03:31.937 "name": "Passthru0", 00:03:31.937 "aliases": [ 00:03:31.937 "791e6203-37ba-5cd2-a002-cb49fc122f4b" 00:03:31.937 ], 00:03:31.937 "product_name": "passthru", 00:03:31.937 "block_size": 512, 00:03:31.937 "num_blocks": 16384, 00:03:31.937 "uuid": "791e6203-37ba-5cd2-a002-cb49fc122f4b", 00:03:31.937 "assigned_rate_limits": { 00:03:31.937 "rw_ios_per_sec": 0, 00:03:31.937 "rw_mbytes_per_sec": 0, 00:03:31.937 "r_mbytes_per_sec": 0, 00:03:31.937 "w_mbytes_per_sec": 0 00:03:31.937 }, 00:03:31.937 "claimed": false, 00:03:31.937 "zoned": false, 00:03:31.937 "supported_io_types": { 00:03:31.937 "read": true, 00:03:31.937 "write": true, 00:03:31.937 "unmap": true, 00:03:31.937 "flush": true, 00:03:31.937 "reset": true, 00:03:31.937 "nvme_admin": false, 00:03:31.937 "nvme_io": false, 00:03:31.937 "nvme_io_md": false, 00:03:31.937 "write_zeroes": true, 00:03:31.937 "zcopy": true, 00:03:31.937 "get_zone_info": false, 00:03:31.937 "zone_management": false, 00:03:31.937 "zone_append": false, 00:03:31.937 "compare": false, 00:03:31.937 "compare_and_write": false, 00:03:31.937 "abort": true, 00:03:31.937 "seek_hole": false, 00:03:31.937 "seek_data": false, 00:03:31.937 "copy": true, 00:03:31.937 "nvme_iov_md": false 00:03:31.937 }, 00:03:31.937 "memory_domains": [ 00:03:31.937 { 00:03:31.937 "dma_device_id": "system", 00:03:31.937 "dma_device_type": 1 00:03:31.937 }, 00:03:31.937 { 00:03:31.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:31.937 "dma_device_type": 2 00:03:31.937 } 00:03:31.937 ], 00:03:31.937 "driver_specific": { 00:03:31.937 "passthru": { 00:03:31.937 "name": "Passthru0", 00:03:31.937 "base_bdev_name": "Malloc0" 00:03:31.937 } 00:03:31.937 } 00:03:31.937 } 00:03:31.937 ]' 00:03:31.937 10:17:05 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:31.937 10:17:05 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:31.937 10:17:05 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:31.937 10:17:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:31.937 10:17:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.937 10:17:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:31.937 10:17:05 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:31.937 10:17:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:31.937 10:17:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.937 10:17:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:31.937 10:17:05 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:03:31.937 10:17:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:31.937 10:17:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.937 10:17:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:31.937 10:17:05 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:31.937 10:17:05 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:31.937 10:17:05 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:31.937 00:03:31.937 real 0m0.273s 00:03:31.937 user 0m0.168s 00:03:31.937 sys 0m0.040s 00:03:31.937 10:17:05 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:31.937 10:17:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.937 ************************************ 00:03:31.937 END TEST rpc_integrity 00:03:31.937 ************************************ 00:03:31.937 10:17:05 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:31.937 10:17:05 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:31.937 10:17:05 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:31.937 10:17:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:31.937 ************************************ 00:03:31.937 START TEST rpc_plugins 00:03:31.937 ************************************ 00:03:31.937 10:17:05 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:31.937 10:17:05 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:31.937 10:17:05 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:31.937 10:17:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:32.196 10:17:05 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:32.196 10:17:05 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:32.196 10:17:05 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:32.196 10:17:05 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:32.196 10:17:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:32.196 10:17:05 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:32.196 10:17:05 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:32.196 { 00:03:32.196 "name": "Malloc1", 00:03:32.196 "aliases": [ 00:03:32.196 "39d16730-19d4-433c-a98d-9204f56c099f" 00:03:32.196 ], 00:03:32.196 "product_name": "Malloc disk", 00:03:32.196 "block_size": 4096, 00:03:32.196 "num_blocks": 256, 00:03:32.196 "uuid": "39d16730-19d4-433c-a98d-9204f56c099f", 00:03:32.196 "assigned_rate_limits": { 00:03:32.196 "rw_ios_per_sec": 0, 00:03:32.196 "rw_mbytes_per_sec": 0, 00:03:32.196 "r_mbytes_per_sec": 0, 00:03:32.196 "w_mbytes_per_sec": 0 00:03:32.196 }, 00:03:32.196 "claimed": false, 00:03:32.196 "zoned": false, 00:03:32.196 "supported_io_types": { 00:03:32.196 "read": true, 00:03:32.196 "write": true, 00:03:32.196 "unmap": true, 00:03:32.196 "flush": true, 00:03:32.196 "reset": true, 00:03:32.196 "nvme_admin": false, 00:03:32.196 "nvme_io": false, 00:03:32.196 "nvme_io_md": false, 00:03:32.196 "write_zeroes": true, 00:03:32.196 "zcopy": true, 00:03:32.196 "get_zone_info": false, 00:03:32.196 "zone_management": false, 00:03:32.196 "zone_append": false, 00:03:32.196 "compare": false, 00:03:32.196 "compare_and_write": false, 00:03:32.196 "abort": true, 00:03:32.196 "seek_hole": false, 00:03:32.196 "seek_data": false, 00:03:32.196 "copy": true, 00:03:32.196 "nvme_iov_md": false 
00:03:32.196 }, 00:03:32.196 "memory_domains": [ 00:03:32.196 { 00:03:32.196 "dma_device_id": "system", 00:03:32.196 "dma_device_type": 1 00:03:32.196 }, 00:03:32.196 { 00:03:32.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:32.196 "dma_device_type": 2 00:03:32.196 } 00:03:32.196 ], 00:03:32.196 "driver_specific": {} 00:03:32.196 } 00:03:32.196 ]' 00:03:32.196 10:17:05 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:32.196 10:17:06 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:32.196 10:17:06 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:32.196 10:17:06 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:32.196 10:17:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:32.196 10:17:06 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:32.196 10:17:06 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:32.196 10:17:06 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:32.196 10:17:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:32.196 10:17:06 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:32.196 10:17:06 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:32.196 10:17:06 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:32.196 10:17:06 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:32.196 00:03:32.196 real 0m0.145s 00:03:32.196 user 0m0.084s 00:03:32.196 sys 0m0.021s 00:03:32.196 10:17:06 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:32.196 10:17:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:32.196 ************************************ 00:03:32.196 END TEST rpc_plugins 00:03:32.197 ************************************ 00:03:32.197 10:17:06 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:32.197 10:17:06 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:32.197 10:17:06 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:32.197 10:17:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:32.197 ************************************ 00:03:32.197 START TEST rpc_trace_cmd_test 00:03:32.197 ************************************ 00:03:32.197 10:17:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:32.197 10:17:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:32.197 10:17:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:32.197 10:17:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:32.197 10:17:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:32.197 10:17:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:32.197 10:17:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:32.197 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1322154", 00:03:32.197 "tpoint_group_mask": "0x8", 00:03:32.197 "iscsi_conn": { 00:03:32.197 "mask": "0x2", 00:03:32.197 "tpoint_mask": "0x0" 00:03:32.197 }, 00:03:32.197 "scsi": { 00:03:32.197 "mask": "0x4", 00:03:32.197 "tpoint_mask": "0x0" 00:03:32.197 }, 00:03:32.197 "bdev": { 00:03:32.197 "mask": "0x8", 00:03:32.197 "tpoint_mask": "0xffffffffffffffff" 00:03:32.197 }, 00:03:32.197 "nvmf_rdma": { 00:03:32.197 "mask": "0x10", 00:03:32.197 "tpoint_mask": "0x0" 00:03:32.197 }, 00:03:32.197 "nvmf_tcp": { 00:03:32.197 "mask": "0x20", 00:03:32.197 
"tpoint_mask": "0x0" 00:03:32.197 }, 00:03:32.197 "ftl": { 00:03:32.197 "mask": "0x40", 00:03:32.197 "tpoint_mask": "0x0" 00:03:32.197 }, 00:03:32.197 "blobfs": { 00:03:32.197 "mask": "0x80", 00:03:32.197 "tpoint_mask": "0x0" 00:03:32.197 }, 00:03:32.197 "dsa": { 00:03:32.197 "mask": "0x200", 00:03:32.197 "tpoint_mask": "0x0" 00:03:32.197 }, 00:03:32.197 "thread": { 00:03:32.197 "mask": "0x400", 00:03:32.197 "tpoint_mask": "0x0" 00:03:32.197 }, 00:03:32.197 "nvme_pcie": { 00:03:32.197 "mask": "0x800", 00:03:32.197 "tpoint_mask": "0x0" 00:03:32.197 }, 00:03:32.197 "iaa": { 00:03:32.197 "mask": "0x1000", 00:03:32.197 "tpoint_mask": "0x0" 00:03:32.197 }, 00:03:32.197 "nvme_tcp": { 00:03:32.197 "mask": "0x2000", 00:03:32.197 "tpoint_mask": "0x0" 00:03:32.197 }, 00:03:32.197 "bdev_nvme": { 00:03:32.197 "mask": "0x4000", 00:03:32.197 "tpoint_mask": "0x0" 00:03:32.197 }, 00:03:32.197 "sock": { 00:03:32.197 "mask": "0x8000", 00:03:32.197 "tpoint_mask": "0x0" 00:03:32.197 }, 00:03:32.197 "blob": { 00:03:32.197 "mask": "0x10000", 00:03:32.197 "tpoint_mask": "0x0" 00:03:32.197 }, 00:03:32.197 "bdev_raid": { 00:03:32.197 "mask": "0x20000", 00:03:32.197 "tpoint_mask": "0x0" 00:03:32.197 }, 00:03:32.197 "scheduler": { 00:03:32.197 "mask": "0x40000", 00:03:32.197 "tpoint_mask": "0x0" 00:03:32.197 } 00:03:32.197 }' 00:03:32.197 10:17:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:32.456 10:17:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:32.456 10:17:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:32.456 10:17:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:32.456 10:17:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:32.456 10:17:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:32.456 10:17:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:32.456 10:17:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:32.456 10:17:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:32.456 10:17:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:32.456 00:03:32.456 real 0m0.224s 00:03:32.456 user 0m0.190s 00:03:32.456 sys 0m0.026s 00:03:32.456 10:17:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:32.456 10:17:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:32.456 ************************************ 00:03:32.456 END TEST rpc_trace_cmd_test 00:03:32.456 ************************************ 00:03:32.456 10:17:06 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:32.456 10:17:06 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:32.456 10:17:06 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:32.456 10:17:06 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:32.456 10:17:06 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:32.456 10:17:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:32.456 ************************************ 00:03:32.456 START TEST rpc_daemon_integrity 00:03:32.456 ************************************ 00:03:32.456 10:17:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:32.456 10:17:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:32.456 10:17:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:32.456 10:17:06 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:32.456 10:17:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:32.456 10:17:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:32.456 10:17:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:32.715 10:17:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:32.715 10:17:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:32.715 10:17:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:32.715 10:17:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:32.715 10:17:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:32.715 10:17:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:32.715 10:17:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:32.715 10:17:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:32.715 10:17:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:32.715 10:17:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:32.715 10:17:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:32.715 { 00:03:32.715 "name": "Malloc2", 00:03:32.715 "aliases": [ 00:03:32.715 "927d5907-f46a-4b88-9c26-a4554e78175a" 00:03:32.715 ], 00:03:32.715 "product_name": "Malloc disk", 00:03:32.715 "block_size": 512, 00:03:32.715 "num_blocks": 16384, 00:03:32.715 "uuid": "927d5907-f46a-4b88-9c26-a4554e78175a", 00:03:32.715 "assigned_rate_limits": { 00:03:32.715 "rw_ios_per_sec": 0, 00:03:32.715 "rw_mbytes_per_sec": 0, 00:03:32.715 "r_mbytes_per_sec": 0, 00:03:32.715 "w_mbytes_per_sec": 0 00:03:32.715 }, 00:03:32.715 "claimed": false, 00:03:32.715 "zoned": false, 00:03:32.715 "supported_io_types": { 00:03:32.715 "read": true, 00:03:32.715 "write": true, 00:03:32.715 "unmap": true, 00:03:32.715 "flush": true, 00:03:32.715 "reset": true, 00:03:32.715 "nvme_admin": false, 00:03:32.715 "nvme_io": false, 00:03:32.715 "nvme_io_md": false, 00:03:32.715 "write_zeroes": true, 00:03:32.715 "zcopy": true, 00:03:32.715 "get_zone_info": false, 00:03:32.715 "zone_management": false, 00:03:32.715 "zone_append": false, 00:03:32.715 "compare": false, 00:03:32.715 "compare_and_write": false, 00:03:32.715 "abort": true, 00:03:32.715 "seek_hole": false, 00:03:32.715 "seek_data": false, 00:03:32.715 "copy": true, 00:03:32.715 "nvme_iov_md": false 00:03:32.715 }, 00:03:32.715 "memory_domains": [ 00:03:32.715 { 00:03:32.715 "dma_device_id": "system", 00:03:32.715 "dma_device_type": 1 00:03:32.715 }, 00:03:32.715 { 00:03:32.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:32.715 "dma_device_type": 2 00:03:32.715 } 00:03:32.715 ], 00:03:32.715 "driver_specific": {} 00:03:32.715 } 00:03:32.715 ]' 00:03:32.715 10:17:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:32.715 10:17:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:32.715 10:17:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:32.715 10:17:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:32.715 10:17:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:32.715 [2024-12-12 10:17:06.569788] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:32.715 
[2024-12-12 10:17:06.569817] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:32.715 [2024-12-12 10:17:06.569828] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d51fe0 00:03:32.715 [2024-12-12 10:17:06.569834] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:32.716 [2024-12-12 10:17:06.570741] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:32.716 [2024-12-12 10:17:06.570763] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:32.716 Passthru0 00:03:32.716 10:17:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:32.716 10:17:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:32.716 10:17:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:32.716 10:17:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:32.716 10:17:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:32.716 10:17:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:32.716 { 00:03:32.716 "name": "Malloc2", 00:03:32.716 "aliases": [ 00:03:32.716 "927d5907-f46a-4b88-9c26-a4554e78175a" 00:03:32.716 ], 00:03:32.716 "product_name": "Malloc disk", 00:03:32.716 "block_size": 512, 00:03:32.716 "num_blocks": 16384, 00:03:32.716 "uuid": "927d5907-f46a-4b88-9c26-a4554e78175a", 00:03:32.716 "assigned_rate_limits": { 00:03:32.716 "rw_ios_per_sec": 0, 00:03:32.716 "rw_mbytes_per_sec": 0, 00:03:32.716 "r_mbytes_per_sec": 0, 00:03:32.716 "w_mbytes_per_sec": 0 00:03:32.716 }, 00:03:32.716 "claimed": true, 00:03:32.716 "claim_type": "exclusive_write", 00:03:32.716 "zoned": false, 00:03:32.716 "supported_io_types": { 00:03:32.716 "read": true, 00:03:32.716 "write": true, 00:03:32.716 "unmap": true, 00:03:32.716 "flush": true, 00:03:32.716 "reset": true, 00:03:32.716 "nvme_admin": false, 00:03:32.716 "nvme_io": false, 00:03:32.716 "nvme_io_md": false, 00:03:32.716 "write_zeroes": true, 00:03:32.716 "zcopy": true, 00:03:32.716 "get_zone_info": false, 00:03:32.716 "zone_management": false, 00:03:32.716 "zone_append": false, 00:03:32.716 "compare": false, 00:03:32.716 "compare_and_write": false, 00:03:32.716 "abort": true, 00:03:32.716 "seek_hole": false, 00:03:32.716 "seek_data": false, 00:03:32.716 "copy": true, 00:03:32.716 "nvme_iov_md": false 00:03:32.716 }, 00:03:32.716 "memory_domains": [ 00:03:32.716 { 00:03:32.716 "dma_device_id": "system", 00:03:32.716 "dma_device_type": 1 00:03:32.716 }, 00:03:32.716 { 00:03:32.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:32.716 "dma_device_type": 2 00:03:32.716 } 00:03:32.716 ], 00:03:32.716 "driver_specific": {} 00:03:32.716 }, 00:03:32.716 { 00:03:32.716 "name": "Passthru0", 00:03:32.716 "aliases": [ 00:03:32.716 "21f2a07a-25a5-513a-a27f-010057b0ac58" 00:03:32.716 ], 00:03:32.716 "product_name": "passthru", 00:03:32.716 "block_size": 512, 00:03:32.716 "num_blocks": 16384, 00:03:32.716 "uuid": "21f2a07a-25a5-513a-a27f-010057b0ac58", 00:03:32.716 "assigned_rate_limits": { 00:03:32.716 "rw_ios_per_sec": 0, 00:03:32.716 "rw_mbytes_per_sec": 0, 00:03:32.716 "r_mbytes_per_sec": 0, 00:03:32.716 "w_mbytes_per_sec": 0 00:03:32.716 }, 00:03:32.716 "claimed": false, 00:03:32.716 "zoned": false, 00:03:32.716 "supported_io_types": { 00:03:32.716 "read": true, 00:03:32.716 "write": true, 00:03:32.716 "unmap": true, 00:03:32.716 "flush": true, 00:03:32.716 "reset": true, 
00:03:32.716 "nvme_admin": false, 00:03:32.716 "nvme_io": false, 00:03:32.716 "nvme_io_md": false, 00:03:32.716 "write_zeroes": true, 00:03:32.716 "zcopy": true, 00:03:32.716 "get_zone_info": false, 00:03:32.716 "zone_management": false, 00:03:32.716 "zone_append": false, 00:03:32.716 "compare": false, 00:03:32.716 "compare_and_write": false, 00:03:32.716 "abort": true, 00:03:32.716 "seek_hole": false, 00:03:32.716 "seek_data": false, 00:03:32.716 "copy": true, 00:03:32.716 "nvme_iov_md": false 00:03:32.716 }, 00:03:32.716 "memory_domains": [ 00:03:32.716 { 00:03:32.716 "dma_device_id": "system", 00:03:32.716 "dma_device_type": 1 00:03:32.716 }, 00:03:32.716 { 00:03:32.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:32.716 "dma_device_type": 2 00:03:32.716 } 00:03:32.716 ], 00:03:32.716 "driver_specific": { 00:03:32.716 "passthru": { 00:03:32.716 "name": "Passthru0", 00:03:32.716 "base_bdev_name": "Malloc2" 00:03:32.716 } 00:03:32.716 } 00:03:32.716 } 00:03:32.716 ]' 00:03:32.716 10:17:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:32.716 10:17:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:32.716 10:17:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:32.716 10:17:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:32.716 10:17:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:32.716 10:17:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:32.716 10:17:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:32.716 10:17:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:32.716 10:17:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:32.716 10:17:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:32.716 10:17:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:32.716 10:17:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:32.716 10:17:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:32.716 10:17:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:32.716 10:17:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:32.716 10:17:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:32.716 10:17:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:32.716 00:03:32.716 real 0m0.272s 00:03:32.716 user 0m0.178s 00:03:32.716 sys 0m0.030s 00:03:32.716 10:17:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:32.716 10:17:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:32.716 ************************************ 00:03:32.716 END TEST rpc_daemon_integrity 00:03:32.716 ************************************ 00:03:32.975 10:17:06 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:32.975 10:17:06 rpc -- rpc/rpc.sh@84 -- # killprocess 1322154 00:03:32.975 10:17:06 rpc -- common/autotest_common.sh@954 -- # '[' -z 1322154 ']' 00:03:32.975 10:17:06 rpc -- common/autotest_common.sh@958 -- # kill -0 1322154 00:03:32.975 10:17:06 rpc -- common/autotest_common.sh@959 -- # uname 00:03:32.975 10:17:06 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:32.975 10:17:06 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1322154 
00:03:32.975 10:17:06 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:32.975 10:17:06 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:32.975 10:17:06 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1322154' 00:03:32.975 killing process with pid 1322154 00:03:32.975 10:17:06 rpc -- common/autotest_common.sh@973 -- # kill 1322154 00:03:32.975 10:17:06 rpc -- common/autotest_common.sh@978 -- # wait 1322154 00:03:33.235 00:03:33.235 real 0m2.094s 00:03:33.235 user 0m2.656s 00:03:33.235 sys 0m0.681s 00:03:33.235 10:17:07 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:33.235 10:17:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:33.235 ************************************ 00:03:33.235 END TEST rpc 00:03:33.235 ************************************ 00:03:33.235 10:17:07 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:33.235 10:17:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:33.235 10:17:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:33.235 10:17:07 -- common/autotest_common.sh@10 -- # set +x 00:03:33.235 ************************************ 00:03:33.235 START TEST skip_rpc 00:03:33.235 ************************************ 00:03:33.235 10:17:07 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:33.235 * Looking for test storage... 00:03:33.235 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:33.235 10:17:07 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:33.494 10:17:07 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:03:33.494 10:17:07 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:33.494 10:17:07 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:33.494 10:17:07 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:33.494 10:17:07 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:33.494 10:17:07 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:33.494 10:17:07 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:33.494 10:17:07 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:33.494 10:17:07 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:33.494 10:17:07 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:33.494 10:17:07 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:33.494 10:17:07 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:33.494 10:17:07 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:33.494 10:17:07 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:33.494 10:17:07 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:33.494 10:17:07 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:33.494 10:17:07 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:33.494 10:17:07 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:33.494 10:17:07 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:33.494 10:17:07 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:33.494 10:17:07 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:33.494 10:17:07 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:33.494 10:17:07 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:33.494 10:17:07 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:33.494 10:17:07 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:33.494 10:17:07 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:33.494 10:17:07 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:33.494 10:17:07 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:33.494 10:17:07 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:33.494 10:17:07 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:33.494 10:17:07 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:33.494 10:17:07 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:33.494 10:17:07 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:33.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.494 --rc genhtml_branch_coverage=1 00:03:33.494 --rc genhtml_function_coverage=1 00:03:33.494 --rc genhtml_legend=1 00:03:33.494 --rc geninfo_all_blocks=1 00:03:33.494 --rc geninfo_unexecuted_blocks=1 00:03:33.494 00:03:33.494 ' 00:03:33.494 10:17:07 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:33.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.494 --rc genhtml_branch_coverage=1 00:03:33.494 --rc genhtml_function_coverage=1 00:03:33.494 --rc genhtml_legend=1 00:03:33.494 --rc geninfo_all_blocks=1 00:03:33.494 --rc geninfo_unexecuted_blocks=1 00:03:33.494 00:03:33.494 ' 00:03:33.494 10:17:07 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:33.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.494 --rc genhtml_branch_coverage=1 00:03:33.494 --rc genhtml_function_coverage=1 00:03:33.494 --rc genhtml_legend=1 00:03:33.494 --rc geninfo_all_blocks=1 00:03:33.494 --rc geninfo_unexecuted_blocks=1 00:03:33.494 00:03:33.494 ' 00:03:33.494 10:17:07 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:33.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.494 --rc genhtml_branch_coverage=1 00:03:33.494 --rc genhtml_function_coverage=1 00:03:33.494 --rc genhtml_legend=1 00:03:33.494 --rc geninfo_all_blocks=1 00:03:33.494 --rc geninfo_unexecuted_blocks=1 00:03:33.494 00:03:33.494 ' 00:03:33.494 10:17:07 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:33.494 10:17:07 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:33.494 10:17:07 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:33.494 10:17:07 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:33.494 10:17:07 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:33.494 10:17:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:33.494 ************************************ 00:03:33.494 START TEST skip_rpc 00:03:33.494 ************************************ 00:03:33.494 10:17:07 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:33.494 
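test_skip_rpc, starting below, is a negative test: it launches spdk_tgt with --no-rpc-server, so no RPC listener ever comes up and the rpc_cmd spdk_get_version that follows has to fail (the harness counts es=1 as the pass). Stripped of the wrappers, the check amounts to this sketch:

    # RPC must be refused while the server is disabled
    build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    tgt=$!
    sleep 5                                    # same settle delay the test uses
    if scripts/rpc.py spdk_get_version; then
        echo "FAIL: rpc unexpectedly answered"
    else
        echo "PASS: rpc correctly refused"
    fi
    kill $tgt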
10:17:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1322777 00:03:33.494 10:17:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:33.494 10:17:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:33.494 10:17:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:33.494 [2024-12-12 10:17:07.430125] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:03:33.494 [2024-12-12 10:17:07.430164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1322777 ] 00:03:33.495 [2024-12-12 10:17:07.502198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:33.753 [2024-12-12 10:17:07.542128] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:03:39.024 10:17:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:39.024 10:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:03:39.024 10:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:39.024 10:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:03:39.024 10:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:39.024 10:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:03:39.024 10:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:39.024 10:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:03:39.024 10:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:39.024 10:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:39.024 10:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:39.024 10:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:03:39.024 10:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:39.024 10:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:39.024 10:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:39.024 10:17:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:39.024 10:17:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1322777 00:03:39.024 10:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1322777 ']' 00:03:39.024 10:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1322777 00:03:39.024 10:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:03:39.024 10:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:39.024 10:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1322777 00:03:39.024 10:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:39.024 10:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:39.024 10:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1322777' 00:03:39.024 killing process with pid 1322777 00:03:39.024 10:17:12 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1322777 00:03:39.024 10:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1322777 00:03:39.024 00:03:39.024 real 0m5.362s 00:03:39.024 user 0m5.133s 00:03:39.024 sys 0m0.267s 00:03:39.024 10:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:39.024 10:17:12 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:39.024 ************************************ 00:03:39.024 END TEST skip_rpc 00:03:39.025 ************************************ 00:03:39.025 10:17:12 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:39.025 10:17:12 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:39.025 10:17:12 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:39.025 10:17:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:39.025 ************************************ 00:03:39.025 START TEST skip_rpc_with_json 00:03:39.025 ************************************ 00:03:39.025 10:17:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:03:39.025 10:17:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:39.025 10:17:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1323698 00:03:39.025 10:17:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:39.025 10:17:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:39.025 10:17:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1323698 00:03:39.025 10:17:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1323698 ']' 00:03:39.025 10:17:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:39.025 10:17:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:39.025 10:17:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:39.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:39.025 10:17:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:39.025 10:17:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:39.025 [2024-12-12 10:17:12.860425] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
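The skip_rpc tests above and below lean on two helpers from common/autotest_common.sh: a trap that guarantees the target dies if the test aborts, and killprocess for the normal teardown. A rough sketch of that lifecycle follows; the helper body and the spdk_tgt path are simplified assumptions, not the project's exact code (the real killprocess also verifies the process name via ps and refuses to kill sudo).

```bash
# Sketch of the start/trap/kill lifecycle seen in the trace above.
SPDK_TGT=${SPDK_TGT:-./build/bin/spdk_tgt}   # assumed binary location

killprocess() {
    local pid=$1
    if kill -0 "$pid" 2>/dev/null; then   # still alive?
        kill "$pid"
        wait "$pid" || true               # reap it, tolerate non-zero exit
    fi
}

"$SPDK_TGT" --no-rpc-server -m 0x1 &
spdk_pid=$!
trap 'killprocess "$spdk_pid"; exit 1' SIGINT SIGTERM EXIT  # cleanup on abort
sleep 5                                    # crude startup wait, as in the log

# ... test body ...

trap - SIGINT SIGTERM EXIT                 # disarm before normal teardown
killprocess "$spdk_pid"
```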
00:03:39.025 [2024-12-12 10:17:12.860468] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1323698 ]
00:03:39.025 [2024-12-12 10:17:12.936374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:39.025 [2024-12-12 10:17:12.977701] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:03:39.283 10:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:03:39.283 10:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0
00:03:39.283 10:17:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:03:39.283 10:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:39.283 10:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:03:39.283 [2024-12-12 10:17:13.195391] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:03:39.283 request:
{
  "trtype": "tcp",
  "method": "nvmf_get_transports",
  "req_id": 1
}
00:03:39.283 Got JSON-RPC error response
00:03:39.283 response:
{
  "code": -19,
  "message": "No such device"
}
00:03:39.283 10:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:03:39.283 10:17:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:03:39.283 10:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:39.283 10:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:03:39.283 [2024-12-12 10:17:13.207502] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:03:39.283 10:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:39.283 10:17:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:03:39.283 10:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:39.283 10:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:03:39.542 10:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:39.542 10:17:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
{
  "subsystems": [
    {
      "subsystem": "fsdev",
      "config": [
        {
          "method": "fsdev_set_opts",
          "params": {
            "fsdev_io_pool_size": 65535,
            "fsdev_io_cache_size": 256
          }
        }
      ]
    },
    {
      "subsystem": "vfio_user_target",
      "config": null
    },
    {
      "subsystem": "keyring",
      "config": []
    },
    {
      "subsystem": "iobuf",
      "config": [
        {
          "method": "iobuf_set_options",
          "params": {
            "small_pool_count": 8192,
            "large_pool_count": 1024,
            "small_bufsize": 8192,
            "large_bufsize": 135168,
            "enable_numa": false
          }
        }
      ]
    },
    {
      "subsystem": "sock",
      "config": [
        {
          "method": "sock_set_default_impl",
          "params": {
            "impl_name": "posix"
          }
        },
        {
          "method": "sock_impl_set_options",
          "params": {
            "impl_name": "ssl",
            "recv_buf_size": 4096,
            "send_buf_size": 4096,
            "enable_recv_pipe": true,
            "enable_quickack": false,
            "enable_placement_id": 0,
            "enable_zerocopy_send_server": true,
            "enable_zerocopy_send_client": false,
            "zerocopy_threshold": 0,
            "tls_version": 0,
            "enable_ktls": false
          }
        },
        {
          "method": "sock_impl_set_options",
          "params": {
            "impl_name": "posix",
            "recv_buf_size": 2097152,
            "send_buf_size": 2097152,
            "enable_recv_pipe": true,
            "enable_quickack": false,
            "enable_placement_id": 0,
            "enable_zerocopy_send_server": true,
            "enable_zerocopy_send_client": false,
            "zerocopy_threshold": 0,
            "tls_version": 0,
            "enable_ktls": false
          }
        }
      ]
    },
    {
      "subsystem": "vmd",
      "config": []
    },
    {
      "subsystem": "accel",
      "config": [
        {
          "method": "accel_set_options",
          "params": {
            "small_cache_size": 128,
            "large_cache_size": 16,
            "task_count": 2048,
            "sequence_count": 2048,
            "buf_count": 2048
          }
        }
      ]
    },
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_set_options",
          "params": {
            "bdev_io_pool_size": 65535,
            "bdev_io_cache_size": 256,
            "bdev_auto_examine": true,
            "iobuf_small_cache_size": 128,
            "iobuf_large_cache_size": 16
          }
        },
        {
          "method": "bdev_raid_set_options",
          "params": {
            "process_window_size_kb": 1024,
            "process_max_bandwidth_mb_sec": 0
          }
        },
        {
          "method": "bdev_iscsi_set_options",
          "params": {
            "timeout_sec": 30
          }
        },
        {
          "method": "bdev_nvme_set_options",
          "params": {
            "action_on_timeout": "none",
            "timeout_us": 0,
            "timeout_admin_us": 0,
            "keep_alive_timeout_ms": 10000,
            "arbitration_burst": 0,
            "low_priority_weight": 0,
            "medium_priority_weight": 0,
            "high_priority_weight": 0,
            "nvme_adminq_poll_period_us": 10000,
            "nvme_ioq_poll_period_us": 0,
            "io_queue_requests": 0,
            "delay_cmd_submit": true,
            "transport_retry_count": 4,
            "bdev_retry_count": 3,
            "transport_ack_timeout": 0,
            "ctrlr_loss_timeout_sec": 0,
            "reconnect_delay_sec": 0,
            "fast_io_fail_timeout_sec": 0,
            "disable_auto_failback": false,
            "generate_uuids": false,
            "transport_tos": 0,
            "nvme_error_stat": false,
            "rdma_srq_size": 0,
            "io_path_stat": false,
            "allow_accel_sequence": false,
            "rdma_max_cq_size": 0,
            "rdma_cm_event_timeout_ms": 0,
            "dhchap_digests": [
              "sha256",
              "sha384",
              "sha512"
            ],
            "dhchap_dhgroups": [
              "null",
              "ffdhe2048",
              "ffdhe3072",
              "ffdhe4096",
              "ffdhe6144",
              "ffdhe8192"
            ],
            "rdma_umr_per_io": false
          }
        },
        {
          "method": "bdev_nvme_set_hotplug",
          "params": {
            "period_us": 100000,
            "enable": false
          }
        },
        {
          "method": "bdev_wait_for_examine"
        }
      ]
    },
    {
      "subsystem": "scsi",
      "config": null
    },
    {
      "subsystem": "scheduler",
      "config": [
        {
          "method": "framework_set_scheduler",
          "params": {
            "name": "static"
          }
        }
      ]
    },
    {
      "subsystem": "vhost_scsi",
      "config": []
    },
    {
      "subsystem": "vhost_blk",
      "config": []
    },
    {
      "subsystem": "ublk",
      "config": []
    },
    {
      "subsystem": "nbd",
      "config": []
    },
    {
      "subsystem": "nvmf",
      "config": [
        {
          "method": "nvmf_set_config",
          "params": {
            "discovery_filter": "match_any",
            "admin_cmd_passthru": {
              "identify_ctrlr": false
            },
            "dhchap_digests": [
              "sha256",
              "sha384",
              "sha512"
            ],
            "dhchap_dhgroups": [
              "null",
              "ffdhe2048",
              "ffdhe3072",
              "ffdhe4096",
              "ffdhe6144",
              "ffdhe8192"
            ]
          }
        },
        {
          "method": "nvmf_set_max_subsystems",
          "params": {
            "max_subsystems": 1024
          }
        },
        {
          "method": "nvmf_set_crdt",
          "params": {
            "crdt1": 0,
            "crdt2": 0,
            "crdt3": 0
          }
        },
        {
          "method": "nvmf_create_transport",
          "params": {
            "trtype": "TCP",
            "max_queue_depth": 128,
            "max_io_qpairs_per_ctrlr": 127,
            "in_capsule_data_size": 4096,
            "max_io_size": 131072,
            "io_unit_size": 131072,
            "max_aq_depth": 128,
            "num_shared_buffers": 511,
            "buf_cache_size": 4294967295,
            "dif_insert_or_strip": false,
            "zcopy": false,
            "c2h_success": true,
            "sock_priority": 0,
            "abort_timeout_sec": 1,
            "ack_timeout": 0,
            "data_wr_pool_size": 0
          }
        }
      ]
    },
    {
      "subsystem": "iscsi",
      "config": [
        {
          "method": "iscsi_set_options",
          "params": {
            "node_base": "iqn.2016-06.io.spdk",
            "max_sessions": 128,
            "max_connections_per_session": 2,
            "max_queue_depth": 64,
            "default_time2wait": 2,
            "default_time2retain": 20,
            "first_burst_length": 8192,
            "immediate_data": true,
            "allow_duplicated_isid": false,
            "error_recovery_level": 0,
            "nop_timeout": 60,
            "nop_in_interval": 30,
            "disable_chap": false,
            "require_chap": false,
            "mutual_chap": false,
            "chap_group": 0,
            "max_large_datain_per_connection": 64,
            "max_r2t_per_connection": 4,
            "pdu_pool_size": 36864,
            "immediate_data_pool_size": 16384,
            "data_out_pool_size": 2048
          }
        }
      ]
    }
  ]
}
00:03:39.544 10:17:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:03:39.544 10:17:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1323698
00:03:39.544 10:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1323698 ']'
00:03:39.544 10:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1323698
00:03:39.544 10:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:03:39.544 10:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:03:39.544 10:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1323698
00:03:39.544 10:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:03:39.544 10:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:03:39.544 10:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1323698'
killing process with pid 1323698
00:03:39.544 10:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1323698
00:03:39.544 10:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1323698
00:03:39.802 10:17:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1323873
00:03:39.802 10:17:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:03:39.802 10:17:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:03:45.068 10:17:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1323873
00:03:45.068 10:17:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1323873 ']'
00:03:45.068 10:17:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1323873
00:03:45.068 10:17:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:03:45.068 10:17:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:03:45.068 10:17:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1323873
00:03:45.068 10:17:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:03:45.068 10:17:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:03:45.068 10:17:18
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1323873' 00:03:45.068 killing process with pid 1323873 00:03:45.068 10:17:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1323873 00:03:45.068 10:17:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1323873 00:03:45.327 10:17:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:45.327 10:17:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:45.327 00:03:45.327 real 0m6.288s 00:03:45.327 user 0m6.011s 00:03:45.327 sys 0m0.589s 00:03:45.327 10:17:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:45.327 10:17:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:45.327 ************************************ 00:03:45.327 END TEST skip_rpc_with_json 00:03:45.327 ************************************ 00:03:45.327 10:17:19 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:45.327 10:17:19 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:45.327 10:17:19 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:45.327 10:17:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:45.327 ************************************ 00:03:45.327 START TEST skip_rpc_with_delay 00:03:45.327 ************************************ 00:03:45.327 10:17:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:03:45.327 10:17:19 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:45.327 10:17:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:03:45.327 10:17:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:45.327 10:17:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:45.327 10:17:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:45.327 10:17:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:45.327 10:17:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:45.327 10:17:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:45.327 10:17:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:45.327 10:17:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:45.327 10:17:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:45.327 10:17:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:45.327 [2024-12-12 10:17:19.227094] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:03:45.327 10:17:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:03:45.327 10:17:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:45.327 10:17:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:45.327 10:17:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:45.327 00:03:45.327 real 0m0.069s 00:03:45.327 user 0m0.043s 00:03:45.327 sys 0m0.025s 00:03:45.327 10:17:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:45.327 10:17:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:45.327 ************************************ 00:03:45.327 END TEST skip_rpc_with_delay 00:03:45.327 ************************************ 00:03:45.327 10:17:19 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:45.327 10:17:19 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:45.327 10:17:19 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:45.327 10:17:19 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:45.327 10:17:19 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:45.327 10:17:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:45.327 ************************************ 00:03:45.327 START TEST exit_on_failed_rpc_init 00:03:45.327 ************************************ 00:03:45.327 10:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:03:45.327 10:17:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1324874 00:03:45.328 10:17:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1324874 00:03:45.328 10:17:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:45.328 10:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1324874 ']' 00:03:45.328 10:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:45.328 10:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:45.328 10:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:45.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:45.328 10:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:45.328 10:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:45.587 [2024-12-12 10:17:19.361829] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
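Both the skip_rpc run earlier and the skip_rpc_with_delay check just above assert failure through the NOT wrapper, with the valid_exec_arg and es= bookkeeping visible in the xtrace. A stripped-down equivalent, which ignores the argument validation and exit-code classification the real helper in common/autotest_common.sh performs, looks roughly like this:

```bash
# Reduced NOT helper: succeed only when the wrapped command fails.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded
    fi
    return 0        # command failed, which is what we wanted
}

# Mirrors the check above: --wait-for-rpc combined with --no-rpc-server
# must be rejected (the spdk_tgt path here is an assumption).
NOT ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
```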
00:03:45.587 [2024-12-12 10:17:19.361871] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1324874 ] 00:03:45.587 [2024-12-12 10:17:19.436680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:45.587 [2024-12-12 10:17:19.478548] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:03:45.846 10:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:45.846 10:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:03:45.846 10:17:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:45.846 10:17:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:45.846 10:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:03:45.846 10:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:45.846 10:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:45.846 10:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:45.846 10:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:45.846 10:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:45.846 10:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:45.846 10:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:45.846 10:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:45.846 10:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:45.846 10:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:45.846 [2024-12-12 10:17:19.744754] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:03:45.846 [2024-12-12 10:17:19.744801] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1324890 ] 00:03:45.846 [2024-12-12 10:17:19.818023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:45.846 [2024-12-12 10:17:19.857039] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:03:45.846 [2024-12-12 10:17:19.857094] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
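This is the behavior exit_on_failed_rpc_init exists to pin down: the first spdk_tgt owns the default RPC socket /var/tmp/spdk.sock, so a second instance must fail its RPC init and exit non-zero, which is exactly what the error lines here report. A standalone sketch of the same contention, where the binary path and the fixed 5-second startup wait are assumptions and hugepages must already be configured:

```bash
SPDK_TGT=./build/bin/spdk_tgt

"$SPDK_TGT" -m 0x1 &                 # instance A, default RPC socket
first_pid=$!
sleep 5                              # let the RPC listener come up

if "$SPDK_TGT" -m 0x2; then          # instance B, same socket path
    echo "FAIL: second instance should not have started" >&2
    kill "$first_pid"; exit 1
fi
echo "OK: duplicate RPC socket rejected"
kill "$first_pid"; wait "$first_pid" || true
```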
00:03:45.846 [2024-12-12 10:17:19.857103] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:45.846 [2024-12-12 10:17:19.857109] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:46.105 10:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:03:46.105 10:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:46.105 10:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:03:46.105 10:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:03:46.105 10:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:03:46.105 10:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:46.105 10:17:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:46.105 10:17:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1324874 00:03:46.105 10:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1324874 ']' 00:03:46.105 10:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1324874 00:03:46.105 10:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:03:46.105 10:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:46.105 10:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1324874 00:03:46.105 10:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:46.105 10:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:46.105 10:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1324874' 00:03:46.105 killing process with pid 1324874 00:03:46.105 10:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1324874 00:03:46.105 10:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1324874 00:03:46.364 00:03:46.364 real 0m0.940s 00:03:46.364 user 0m0.998s 00:03:46.364 sys 0m0.383s 00:03:46.364 10:17:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:46.364 10:17:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:46.364 ************************************ 00:03:46.364 END TEST exit_on_failed_rpc_init 00:03:46.364 ************************************ 00:03:46.364 10:17:20 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:46.364 00:03:46.364 real 0m13.127s 00:03:46.364 user 0m12.402s 00:03:46.364 sys 0m1.547s 00:03:46.364 10:17:20 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:46.364 10:17:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:46.364 ************************************ 00:03:46.364 END TEST skip_rpc 00:03:46.364 ************************************ 00:03:46.364 10:17:20 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:46.364 10:17:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:46.364 10:17:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:46.364 10:17:20 -- 
common/autotest_common.sh@10 -- # set +x 00:03:46.364 ************************************ 00:03:46.364 START TEST rpc_client 00:03:46.364 ************************************ 00:03:46.364 10:17:20 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:46.623 * Looking for test storage... 00:03:46.623 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:46.623 10:17:20 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:46.623 10:17:20 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:03:46.623 10:17:20 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:46.623 10:17:20 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:46.623 10:17:20 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:46.623 10:17:20 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:46.623 10:17:20 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:46.623 10:17:20 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:03:46.623 10:17:20 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:03:46.623 10:17:20 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:03:46.623 10:17:20 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:03:46.623 10:17:20 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:03:46.623 10:17:20 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:03:46.623 10:17:20 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:03:46.623 10:17:20 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:46.623 10:17:20 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:03:46.623 10:17:20 rpc_client -- scripts/common.sh@345 -- # : 1 00:03:46.623 10:17:20 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:46.623 10:17:20 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:46.623 10:17:20 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:03:46.623 10:17:20 rpc_client -- scripts/common.sh@353 -- # local d=1 00:03:46.623 10:17:20 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:46.623 10:17:20 rpc_client -- scripts/common.sh@355 -- # echo 1 00:03:46.623 10:17:20 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:03:46.623 10:17:20 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:03:46.623 10:17:20 rpc_client -- scripts/common.sh@353 -- # local d=2 00:03:46.623 10:17:20 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:46.623 10:17:20 rpc_client -- scripts/common.sh@355 -- # echo 2 00:03:46.623 10:17:20 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:03:46.623 10:17:20 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:46.623 10:17:20 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:46.623 10:17:20 rpc_client -- scripts/common.sh@368 -- # return 0 00:03:46.623 10:17:20 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:46.623 10:17:20 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:46.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.623 --rc genhtml_branch_coverage=1 00:03:46.623 --rc genhtml_function_coverage=1 00:03:46.623 --rc genhtml_legend=1 00:03:46.623 --rc geninfo_all_blocks=1 00:03:46.623 --rc geninfo_unexecuted_blocks=1 00:03:46.623 00:03:46.623 ' 00:03:46.623 10:17:20 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:46.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.623 --rc genhtml_branch_coverage=1 00:03:46.623 --rc genhtml_function_coverage=1 00:03:46.623 --rc genhtml_legend=1 00:03:46.623 --rc geninfo_all_blocks=1 00:03:46.623 --rc geninfo_unexecuted_blocks=1 00:03:46.623 00:03:46.623 ' 00:03:46.623 10:17:20 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:46.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.623 --rc genhtml_branch_coverage=1 00:03:46.624 --rc genhtml_function_coverage=1 00:03:46.624 --rc genhtml_legend=1 00:03:46.624 --rc geninfo_all_blocks=1 00:03:46.624 --rc geninfo_unexecuted_blocks=1 00:03:46.624 00:03:46.624 ' 00:03:46.624 10:17:20 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:46.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.624 --rc genhtml_branch_coverage=1 00:03:46.624 --rc genhtml_function_coverage=1 00:03:46.624 --rc genhtml_legend=1 00:03:46.624 --rc geninfo_all_blocks=1 00:03:46.624 --rc geninfo_unexecuted_blocks=1 00:03:46.624 00:03:46.624 ' 00:03:46.624 10:17:20 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:46.624 OK 00:03:46.624 10:17:20 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:46.624 00:03:46.624 real 0m0.201s 00:03:46.624 user 0m0.114s 00:03:46.624 sys 0m0.100s 00:03:46.624 10:17:20 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:46.624 10:17:20 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:46.624 ************************************ 00:03:46.624 END TEST rpc_client 00:03:46.624 ************************************ 00:03:46.624 10:17:20 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
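The rpc_client preamble above walks scripts/common.sh's field-by-field cmp_versions loop to decide whether the installed lcov predates 2.x. A compact equivalent that delegates the ordering to GNU sort's -V mode is shown below; this is a simplification for illustration, not the repository's actual helper, and it assumes GNU coreutils and an installed lcov.

```bash
# True if $1 is strictly older than $2 in version order.
version_lt() {
    [ "$1" = "$2" ] && return 1
    [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

# Same decision as the trace: pick coverage flags for pre-2.x lcov.
if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
    echo "lcov is pre-2.x: enabling branch/function coverage options"
fi
```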
00:03:46.624 10:17:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:46.624 10:17:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:46.624 10:17:20 -- common/autotest_common.sh@10 -- # set +x 00:03:46.624 ************************************ 00:03:46.624 START TEST json_config 00:03:46.624 ************************************ 00:03:46.624 10:17:20 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:46.922 10:17:20 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:46.922 10:17:20 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:03:46.922 10:17:20 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:46.922 10:17:20 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:46.922 10:17:20 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:46.922 10:17:20 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:46.922 10:17:20 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:46.922 10:17:20 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:03:46.922 10:17:20 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:03:46.922 10:17:20 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:03:46.922 10:17:20 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:03:46.922 10:17:20 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:03:46.922 10:17:20 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:03:46.922 10:17:20 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:03:46.922 10:17:20 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:46.922 10:17:20 json_config -- scripts/common.sh@344 -- # case "$op" in 00:03:46.922 10:17:20 json_config -- scripts/common.sh@345 -- # : 1 00:03:46.922 10:17:20 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:46.922 10:17:20 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:46.922 10:17:20 json_config -- scripts/common.sh@365 -- # decimal 1 00:03:46.922 10:17:20 json_config -- scripts/common.sh@353 -- # local d=1 00:03:46.922 10:17:20 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:46.922 10:17:20 json_config -- scripts/common.sh@355 -- # echo 1 00:03:46.922 10:17:20 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:03:46.922 10:17:20 json_config -- scripts/common.sh@366 -- # decimal 2 00:03:46.922 10:17:20 json_config -- scripts/common.sh@353 -- # local d=2 00:03:46.922 10:17:20 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:46.922 10:17:20 json_config -- scripts/common.sh@355 -- # echo 2 00:03:46.922 10:17:20 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:03:46.922 10:17:20 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:46.922 10:17:20 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:46.922 10:17:20 json_config -- scripts/common.sh@368 -- # return 0 00:03:46.922 10:17:20 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:46.922 10:17:20 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:46.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.922 --rc genhtml_branch_coverage=1 00:03:46.922 --rc genhtml_function_coverage=1 00:03:46.922 --rc genhtml_legend=1 00:03:46.922 --rc geninfo_all_blocks=1 00:03:46.922 --rc geninfo_unexecuted_blocks=1 00:03:46.922 00:03:46.922 ' 00:03:46.922 10:17:20 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:46.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.922 --rc genhtml_branch_coverage=1 00:03:46.922 --rc genhtml_function_coverage=1 00:03:46.922 --rc genhtml_legend=1 00:03:46.922 --rc geninfo_all_blocks=1 00:03:46.922 --rc geninfo_unexecuted_blocks=1 00:03:46.922 00:03:46.922 ' 00:03:46.922 10:17:20 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:46.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.922 --rc genhtml_branch_coverage=1 00:03:46.922 --rc genhtml_function_coverage=1 00:03:46.922 --rc genhtml_legend=1 00:03:46.922 --rc geninfo_all_blocks=1 00:03:46.922 --rc geninfo_unexecuted_blocks=1 00:03:46.922 00:03:46.922 ' 00:03:46.922 10:17:20 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:46.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.922 --rc genhtml_branch_coverage=1 00:03:46.922 --rc genhtml_function_coverage=1 00:03:46.922 --rc genhtml_legend=1 00:03:46.922 --rc geninfo_all_blocks=1 00:03:46.922 --rc geninfo_unexecuted_blocks=1 00:03:46.922 00:03:46.922 ' 00:03:46.922 10:17:20 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:46.922 10:17:20 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:46.922 10:17:20 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:46.922 10:17:20 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:46.922 10:17:20 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:46.922 10:17:20 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:46.922 10:17:20 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:46.922 10:17:20 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:46.922 10:17:20 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:03:46.922 10:17:20 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:46.922 10:17:20 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:46.922 10:17:20 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:46.922 10:17:20 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:03:46.922 10:17:20 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:03:46.922 10:17:20 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:46.922 10:17:20 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:46.922 10:17:20 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:46.922 10:17:20 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:46.922 10:17:20 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:46.922 10:17:20 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:03:46.922 10:17:20 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:46.922 10:17:20 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:46.922 10:17:20 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:46.922 10:17:20 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:46.922 10:17:20 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:46.922 10:17:20 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:46.923 10:17:20 json_config -- paths/export.sh@5 -- # export PATH 00:03:46.923 10:17:20 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:46.923 10:17:20 json_config -- nvmf/common.sh@51 -- # : 0 00:03:46.923 10:17:20 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:46.923 10:17:20 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
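A side note on the long PATH values printed by paths/export.sh above: each sourcing prepends the same directories again, which is why /opt/go, /opt/protoc and /opt/golangci each appear four times. A guarded prepend keeps PATH idempotent; the helper below is a hypothetical sketch, not something in the repo.

```bash
# Only prepend the directory if it is not already on PATH.
path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;                  # already there, nothing to do
        *) PATH="$1:$PATH" ;;
    esac
}
path_prepend /opt/go/1.21.1/bin
path_prepend /opt/protoc/21.7/bin
path_prepend /opt/golangci/1.54.2/bin
export PATH
```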
00:03:46.923 10:17:20 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:46.923 10:17:20 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:46.923 10:17:20 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:46.923 10:17:20 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:46.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:46.923 10:17:20 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:46.923 10:17:20 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:46.923 10:17:20 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:46.923 10:17:20 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:46.923 10:17:20 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:46.923 10:17:20 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:46.923 10:17:20 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:46.923 10:17:20 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:46.923 10:17:20 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:46.923 10:17:20 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:46.923 10:17:20 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:46.923 10:17:20 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:46.923 10:17:20 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:46.923 10:17:20 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:46.923 10:17:20 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:46.923 10:17:20 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:46.923 10:17:20 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:46.923 10:17:20 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:46.923 10:17:20 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:03:46.923 INFO: JSON configuration test init 00:03:46.923 10:17:20 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:03:46.923 10:17:20 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:03:46.923 10:17:20 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:46.923 10:17:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:46.923 10:17:20 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:03:46.923 10:17:20 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:46.923 10:17:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:46.923 10:17:20 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:03:46.923 10:17:20 json_config -- 
json_config/common.sh@9 -- # local app=target 00:03:46.923 10:17:20 json_config -- json_config/common.sh@10 -- # shift 00:03:46.923 10:17:20 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:46.923 10:17:20 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:46.923 10:17:20 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:46.923 10:17:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:46.923 10:17:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:46.923 10:17:20 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1325236 00:03:46.923 10:17:20 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:46.923 Waiting for target to run... 00:03:46.923 10:17:20 json_config -- json_config/common.sh@25 -- # waitforlisten 1325236 /var/tmp/spdk_tgt.sock 00:03:46.923 10:17:20 json_config -- common/autotest_common.sh@835 -- # '[' -z 1325236 ']' 00:03:46.923 10:17:20 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:46.923 10:17:20 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:46.923 10:17:20 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:46.923 10:17:20 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:46.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:46.923 10:17:20 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:46.923 10:17:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:46.923 [2024-12-12 10:17:20.873881] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
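Worth flagging from a few lines back: the message `nvmf/common.sh: line 33: [: : integer expression expected` comes from running `'[' '' -eq 1 ']'`, that is, an arithmetic test against an empty variable. It is harmless in this run, but the failure mode and the usual guard look like this (illustrative snippet, not the file's actual fix):

```bash
VAR=""                                         # empty, as in common.sh
[ "$VAR" -eq 1 ] && echo yes                   # "[: : integer expression expected"
[ "${VAR:-0}" -eq 1 ] && echo yes || echo no   # guarded: prints "no", no warning
```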
00:03:46.923 [2024-12-12 10:17:20.873927] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1325236 ] 00:03:47.510 [2024-12-12 10:17:21.331283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:47.510 [2024-12-12 10:17:21.389787] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:03:47.768 10:17:21 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:47.768 10:17:21 json_config -- common/autotest_common.sh@868 -- # return 0 00:03:47.768 10:17:21 json_config -- json_config/common.sh@26 -- # echo '' 00:03:47.768 00:03:47.768 10:17:21 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:03:47.768 10:17:21 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:03:47.768 10:17:21 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:47.768 10:17:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:47.768 10:17:21 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:03:47.768 10:17:21 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:03:47.768 10:17:21 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:47.768 10:17:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:47.768 10:17:21 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:47.768 10:17:21 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:03:47.768 10:17:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:51.055 10:17:24 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:03:51.055 10:17:24 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:51.055 10:17:24 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:51.055 10:17:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:51.055 10:17:24 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:03:51.055 10:17:24 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:51.055 10:17:24 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:03:51.055 10:17:24 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:03:51.055 10:17:24 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:03:51.055 10:17:24 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:03:51.055 10:17:24 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:03:51.055 10:17:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:51.055 10:17:25 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:03:51.055 10:17:25 json_config -- json_config/json_config.sh@51 -- # local get_types 00:03:51.055 10:17:25 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:03:51.055 10:17:25 json_config -- 
json_config/json_config.sh@54 -- # sort 00:03:51.055 10:17:25 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:03:51.055 10:17:25 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:03:51.055 10:17:25 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:03:51.055 10:17:25 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:03:51.055 10:17:25 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:03:51.055 10:17:25 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:03:51.055 10:17:25 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:51.055 10:17:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:51.312 10:17:25 json_config -- json_config/json_config.sh@62 -- # return 0 00:03:51.312 10:17:25 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:03:51.312 10:17:25 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:03:51.312 10:17:25 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:03:51.312 10:17:25 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:03:51.312 10:17:25 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:03:51.312 10:17:25 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:03:51.312 10:17:25 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:51.312 10:17:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:51.312 10:17:25 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:51.312 10:17:25 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:03:51.312 10:17:25 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:03:51.312 10:17:25 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:51.312 10:17:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:51.312 MallocForNvmf0 00:03:51.312 10:17:25 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:51.312 10:17:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:51.568 MallocForNvmf1 00:03:51.568 10:17:25 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:51.568 10:17:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:51.826 [2024-12-12 10:17:25.676105] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:51.826 10:17:25 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:51.826 10:17:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:52.084 10:17:25 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:52.084 10:17:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:52.342 10:17:26 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:52.342 10:17:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:52.342 10:17:26 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:52.342 10:17:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:52.601 [2024-12-12 10:17:26.466494] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:52.601 10:17:26 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:03:52.601 10:17:26 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:52.601 10:17:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:52.601 10:17:26 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:03:52.601 10:17:26 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:52.601 10:17:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:52.601 10:17:26 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:03:52.601 10:17:26 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:52.601 10:17:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:52.860 MallocBdevForConfigChangeCheck 00:03:52.860 10:17:26 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:03:52.860 10:17:26 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:52.860 10:17:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:52.860 10:17:26 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:03:52.860 10:17:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:53.119 10:17:27 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:03:53.119 INFO: shutting down applications... 
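For reference, the target setup exercised above reduces to the following RPC sequence; a minimal hand-runnable sketch, assuming spdk_tgt is already listening on /var/tmp/spdk_tgt.sock and the working directory is the SPDK repo root (both taken from this log):

rpc="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
$rpc bdev_malloc_create 8 512 --name MallocForNvmf0    # 8 MB malloc bdev, 512-byte blocks
$rpc bdev_malloc_create 4 1024 --name MallocForNvmf1   # 4 MB malloc bdev, 1024-byte blocks
$rpc nvmf_create_transport -t tcp -u 8192 -c 0         # transport options exactly as traced above
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
$rpc save_config > spdk_tgt_config.json                # snapshot used by the relaunch check below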
00:03:53.119 10:17:27 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:03:53.119 10:17:27 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:03:53.119 10:17:27 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:03:53.119 10:17:27 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:03:55.022 Calling clear_iscsi_subsystem 00:03:55.022 Calling clear_nvmf_subsystem 00:03:55.022 Calling clear_nbd_subsystem 00:03:55.022 Calling clear_ublk_subsystem 00:03:55.022 Calling clear_vhost_blk_subsystem 00:03:55.022 Calling clear_vhost_scsi_subsystem 00:03:55.022 Calling clear_bdev_subsystem 00:03:55.022 10:17:28 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:03:55.022 10:17:28 json_config -- json_config/json_config.sh@350 -- # count=100 00:03:55.022 10:17:28 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:03:55.022 10:17:28 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:55.022 10:17:28 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:03:55.022 10:17:28 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:03:55.281 10:17:29 json_config -- json_config/json_config.sh@352 -- # break 00:03:55.281 10:17:29 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:03:55.281 10:17:29 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:03:55.281 10:17:29 json_config -- json_config/common.sh@31 -- # local app=target 00:03:55.281 10:17:29 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:55.281 10:17:29 json_config -- json_config/common.sh@35 -- # [[ -n 1325236 ]] 00:03:55.281 10:17:29 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1325236 00:03:55.281 10:17:29 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:55.281 10:17:29 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:55.281 10:17:29 json_config -- json_config/common.sh@41 -- # kill -0 1325236 00:03:55.281 10:17:29 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:03:55.849 10:17:29 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:03:55.849 10:17:29 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:55.849 10:17:29 json_config -- json_config/common.sh@41 -- # kill -0 1325236 00:03:55.849 10:17:29 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:55.849 10:17:29 json_config -- json_config/common.sh@43 -- # break 00:03:55.849 10:17:29 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:55.849 10:17:29 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:55.849 SPDK target shutdown done 00:03:55.849 10:17:29 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:03:55.849 INFO: relaunching applications... 
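The teardown above first empties the runtime config and only then signals the target. A sketch of the emptiness check as traced (assuming config_filter.py -method check_empty exits 0 once only global parameters remain, which matches the single-pass break seen here):

./test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
rpc="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
filter=./test/json_config/config_filter.py
count=100                                   # retry budget, as in the trace
while [ "$count" -gt 0 ]; do
    if $rpc save_config | $filter -method delete_global_parameters \
                        | $filter -method check_empty; then
        break                               # nothing left but global parameters
    fi
    count=$((count - 1))
done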
00:03:55.849 10:17:29 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:55.849 10:17:29 json_config -- json_config/common.sh@9 -- # local app=target 00:03:55.849 10:17:29 json_config -- json_config/common.sh@10 -- # shift 00:03:55.849 10:17:29 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:55.849 10:17:29 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:55.849 10:17:29 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:55.849 10:17:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:55.850 10:17:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:55.850 10:17:29 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1326720 00:03:55.850 10:17:29 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:55.850 Waiting for target to run... 00:03:55.850 10:17:29 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:55.850 10:17:29 json_config -- json_config/common.sh@25 -- # waitforlisten 1326720 /var/tmp/spdk_tgt.sock 00:03:55.850 10:17:29 json_config -- common/autotest_common.sh@835 -- # '[' -z 1326720 ']' 00:03:55.850 10:17:29 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:55.850 10:17:29 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:55.850 10:17:29 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:55.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:55.850 10:17:29 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:55.850 10:17:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:55.850 [2024-12-12 10:17:29.623018] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:03:55.850 [2024-12-12 10:17:29.623072] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1326720 ] 00:03:56.109 [2024-12-12 10:17:29.915864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:56.109 [2024-12-12 10:17:29.950652] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:03:59.394 [2024-12-12 10:17:32.980483] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:59.394 [2024-12-12 10:17:33.012786] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:59.394 10:17:33 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:59.394 10:17:33 json_config -- common/autotest_common.sh@868 -- # return 0 00:03:59.394 10:17:33 json_config -- json_config/common.sh@26 -- # echo '' 00:03:59.394 00:03:59.394 10:17:33 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:03:59.394 10:17:33 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:03:59.394 INFO: Checking if target configuration is the same... 
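A condensed sketch of the relaunch-and-verify step: restart spdk_tgt from the saved JSON (same command line as above), poll the RPC socket until it answers (the effect of waitforlisten; the polling interval here is an assumption), then compare live and on-disk configs the way json_diff.sh does in the trace that follows, i.e. config_filter.py -method sort on both sides plus a plain diff:

./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json spdk_tgt_config.json &
rpc="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
for i in $(seq 1 100); do                     # max_retries=100, as in the trace
    $rpc rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1                                 # interval is a sketch assumption
done
filter=./test/json_config/config_filter.py
live=$(mktemp) file=$(mktemp)
$rpc save_config | $filter -method sort > "$live"
$filter -method sort < spdk_tgt_config.json > "$file"
diff -u "$live" "$file" && echo 'INFO: JSON config files are the same'
rm -f "$live" "$file"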
00:03:59.394 10:17:33 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:03:59.394 10:17:33 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:59.394 10:17:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:59.394 + '[' 2 -ne 2 ']' 00:03:59.394 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:59.394 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:03:59.394 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:59.394 +++ basename /dev/fd/62 00:03:59.394 ++ mktemp /tmp/62.XXX 00:03:59.394 + tmp_file_1=/tmp/62.9lz 00:03:59.394 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:59.394 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:59.394 + tmp_file_2=/tmp/spdk_tgt_config.json.M7H 00:03:59.394 + ret=0 00:03:59.394 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:59.394 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:59.652 + diff -u /tmp/62.9lz /tmp/spdk_tgt_config.json.M7H 00:03:59.652 + echo 'INFO: JSON config files are the same' 00:03:59.652 INFO: JSON config files are the same 00:03:59.652 + rm /tmp/62.9lz /tmp/spdk_tgt_config.json.M7H 00:03:59.652 + exit 0 00:03:59.652 10:17:33 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:03:59.652 10:17:33 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:03:59.652 INFO: changing configuration and checking if this can be detected... 00:03:59.652 10:17:33 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:59.652 10:17:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:59.652 10:17:33 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:59.652 10:17:33 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:03:59.652 10:17:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:59.652 + '[' 2 -ne 2 ']' 00:03:59.652 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:59.652 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:03:59.653 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:59.653 +++ basename /dev/fd/62 00:03:59.653 ++ mktemp /tmp/62.XXX 00:03:59.653 + tmp_file_1=/tmp/62.CEM 00:03:59.653 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:59.653 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:59.653 + tmp_file_2=/tmp/spdk_tgt_config.json.SgK 00:03:59.653 + ret=0 00:03:59.653 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:00.219 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:00.219 + diff -u /tmp/62.CEM /tmp/spdk_tgt_config.json.SgK 00:04:00.219 + ret=1 00:04:00.219 + echo '=== Start of file: /tmp/62.CEM ===' 00:04:00.219 + cat /tmp/62.CEM 00:04:00.219 + echo '=== End of file: /tmp/62.CEM ===' 00:04:00.219 + echo '' 00:04:00.219 + echo '=== Start of file: /tmp/spdk_tgt_config.json.SgK ===' 00:04:00.219 + cat /tmp/spdk_tgt_config.json.SgK 00:04:00.219 + echo '=== End of file: /tmp/spdk_tgt_config.json.SgK ===' 00:04:00.219 + echo '' 00:04:00.219 + rm /tmp/62.CEM /tmp/spdk_tgt_config.json.SgK 00:04:00.219 + exit 1 00:04:00.219 10:17:34 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:00.219 INFO: configuration change detected. 00:04:00.219 10:17:34 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:00.219 10:17:34 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:00.219 10:17:34 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:00.219 10:17:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:00.219 10:17:34 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:00.219 10:17:34 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:00.219 10:17:34 json_config -- json_config/json_config.sh@324 -- # [[ -n 1326720 ]] 00:04:00.219 10:17:34 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:00.219 10:17:34 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:00.219 10:17:34 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:00.219 10:17:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:00.219 10:17:34 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:00.219 10:17:34 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:00.219 10:17:34 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:00.219 10:17:34 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:00.219 10:17:34 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:00.219 10:17:34 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:00.219 10:17:34 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:00.219 10:17:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:00.219 10:17:34 json_config -- json_config/json_config.sh@330 -- # killprocess 1326720 00:04:00.219 10:17:34 json_config -- common/autotest_common.sh@954 -- # '[' -z 1326720 ']' 00:04:00.219 10:17:34 json_config -- common/autotest_common.sh@958 -- # kill -0 1326720 00:04:00.219 10:17:34 json_config -- common/autotest_common.sh@959 -- # uname 00:04:00.219 10:17:34 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:00.219 10:17:34 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1326720 00:04:00.219 10:17:34 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:00.219 10:17:34 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:00.219 10:17:34 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1326720' 00:04:00.219 killing process with pid 1326720 00:04:00.219 10:17:34 json_config -- common/autotest_common.sh@973 -- # kill 1326720 00:04:00.219 10:17:34 json_config -- common/autotest_common.sh@978 -- # wait 1326720 00:04:01.595 10:17:35 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:01.595 10:17:35 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:01.595 10:17:35 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:01.595 10:17:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:01.856 10:17:35 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:01.856 10:17:35 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:01.856 INFO: Success 00:04:01.856 00:04:01.856 real 0m15.023s 00:04:01.856 user 0m15.511s 00:04:01.856 sys 0m2.502s 00:04:01.856 10:17:35 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:01.856 10:17:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:01.856 ************************************ 00:04:01.856 END TEST json_config 00:04:01.856 ************************************ 00:04:01.856 10:17:35 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:01.856 10:17:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:01.856 10:17:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:01.856 10:17:35 -- common/autotest_common.sh@10 -- # set +x 00:04:01.856 ************************************ 00:04:01.856 START TEST json_config_extra_key 00:04:01.856 ************************************ 00:04:01.856 10:17:35 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:01.856 10:17:35 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:01.856 10:17:35 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:04:01.856 10:17:35 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:01.856 10:17:35 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:01.856 10:17:35 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:01.856 10:17:35 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:01.856 10:17:35 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:01.856 10:17:35 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:01.856 10:17:35 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:01.856 10:17:35 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:01.856 10:17:35 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:01.856 10:17:35 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:01.856 10:17:35 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:04:01.856 10:17:35 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:01.856 10:17:35 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:01.856 10:17:35 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:01.856 10:17:35 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:01.856 10:17:35 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:01.856 10:17:35 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:01.856 10:17:35 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:01.856 10:17:35 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:01.856 10:17:35 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:01.856 10:17:35 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:01.856 10:17:35 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:01.856 10:17:35 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:01.856 10:17:35 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:01.856 10:17:35 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:01.857 10:17:35 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:01.857 10:17:35 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:01.857 10:17:35 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:01.857 10:17:35 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:01.857 10:17:35 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:01.857 10:17:35 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:01.857 10:17:35 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:01.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.857 --rc genhtml_branch_coverage=1 00:04:01.857 --rc genhtml_function_coverage=1 00:04:01.857 --rc genhtml_legend=1 00:04:01.857 --rc geninfo_all_blocks=1 00:04:01.857 --rc geninfo_unexecuted_blocks=1 00:04:01.857 00:04:01.857 ' 00:04:01.857 10:17:35 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:01.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.857 --rc genhtml_branch_coverage=1 00:04:01.857 --rc genhtml_function_coverage=1 00:04:01.857 --rc genhtml_legend=1 00:04:01.857 --rc geninfo_all_blocks=1 00:04:01.857 --rc geninfo_unexecuted_blocks=1 00:04:01.857 00:04:01.857 ' 00:04:01.857 10:17:35 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:01.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.857 --rc genhtml_branch_coverage=1 00:04:01.857 --rc genhtml_function_coverage=1 00:04:01.857 --rc genhtml_legend=1 00:04:01.857 --rc geninfo_all_blocks=1 00:04:01.857 --rc geninfo_unexecuted_blocks=1 00:04:01.857 00:04:01.857 ' 00:04:02.123 10:17:35 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:02.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.123 --rc genhtml_branch_coverage=1 00:04:02.123 --rc genhtml_function_coverage=1 00:04:02.123 --rc genhtml_legend=1 00:04:02.123 --rc geninfo_all_blocks=1 00:04:02.123 --rc geninfo_unexecuted_blocks=1 00:04:02.123 00:04:02.123 ' 00:04:02.123 10:17:35 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:02.123 10:17:35 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:02.123 10:17:35 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:02.123 10:17:35 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:02.123 10:17:35 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:02.123 10:17:35 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:02.123 10:17:35 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:02.123 10:17:35 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:02.123 10:17:35 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:02.124 10:17:35 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:02.124 10:17:35 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:02.124 10:17:35 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:02.124 10:17:35 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:04:02.124 10:17:35 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:04:02.124 10:17:35 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:02.124 10:17:35 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:02.124 10:17:35 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:02.124 10:17:35 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:02.124 10:17:35 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:02.124 10:17:35 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:02.124 10:17:35 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:02.124 10:17:35 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:02.124 10:17:35 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:02.124 10:17:35 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.124 10:17:35 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.124 10:17:35 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.124 10:17:35 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:02.124 10:17:35 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.124 10:17:35 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:02.124 10:17:35 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:02.124 10:17:35 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:02.124 10:17:35 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:02.124 10:17:35 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:02.124 10:17:35 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:02.124 10:17:35 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:02.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:02.124 10:17:35 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:02.124 10:17:35 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:02.124 10:17:35 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:02.124 10:17:35 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:02.124 10:17:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:02.124 10:17:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:02.124 10:17:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:02.124 10:17:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:02.124 10:17:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:02.124 10:17:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:02.124 10:17:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:02.124 10:17:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:02.124 10:17:35 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:02.124 10:17:35 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:02.124 INFO: launching applications... 
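One line worth flagging in the trace above: nvmf/common.sh line 33 runs '[' '' -eq 1 ']', and the test builtin rejects the empty left operand, hence the harmless "[: : integer expression expected" message. The usual guard is to give the variable a numeric default before the comparison; a generic sketch (the variable name is hypothetical, since the log does not show which one is empty):

maybe_flag=""                              # empty, as in the trace
if [ "${maybe_flag:-0}" -eq 1 ]; then      # :-0 supplies a default, avoiding the error
    echo "feature enabled"
fi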
00:04:02.124 10:17:35 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:02.124 10:17:35 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:02.124 10:17:35 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:02.124 10:17:35 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:02.124 10:17:35 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:02.124 10:17:35 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:02.124 10:17:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:02.124 10:17:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:02.124 10:17:35 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1327960 00:04:02.124 10:17:35 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:02.124 Waiting for target to run... 00:04:02.124 10:17:35 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1327960 /var/tmp/spdk_tgt.sock 00:04:02.124 10:17:35 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1327960 ']' 00:04:02.124 10:17:35 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:02.124 10:17:35 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:02.124 10:17:35 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:02.124 10:17:35 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:02.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:02.124 10:17:35 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:02.124 10:17:35 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:02.124 [2024-12-12 10:17:35.969886] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:04:02.124 [2024-12-12 10:17:35.969930] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1327960 ] 00:04:02.691 [2024-12-12 10:17:36.422470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.691 [2024-12-12 10:17:36.479691] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.949 10:17:36 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:02.949 10:17:36 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:02.949 10:17:36 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:02.949 00:04:02.949 10:17:36 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:02.949 INFO: shutting down applications... 
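The shutdown that follows goes through the same kill-and-poll helper as before, addressed through the app_pid associative array declared earlier in this trace; a minimal sketch of that indirection, using the pid from this run:

declare -A app_pid=([target]=1327960)       # recorded at launch by common.sh
app=target
kill -SIGINT "${app_pid[$app]}"             # ask the reactor to exit cleanly
for i in $(seq 1 30); do                    # up to 30 * 0.5 s, mirroring the trace
    if ! kill -0 "${app_pid[$app]}" 2>/dev/null; then
        app_pid[$app]=                      # clear the slot, as common.sh does
        echo 'SPDK target shutdown done'
        break
    fi
    sleep 0.5
done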
00:04:02.949 10:17:36 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:02.949 10:17:36 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:02.949 10:17:36 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:02.950 10:17:36 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1327960 ]] 00:04:02.950 10:17:36 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1327960 00:04:02.950 10:17:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:02.950 10:17:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:02.950 10:17:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1327960 00:04:02.950 10:17:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:03.517 10:17:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:03.517 10:17:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:03.517 10:17:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1327960 00:04:03.517 10:17:37 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:03.517 10:17:37 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:03.517 10:17:37 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:03.517 10:17:37 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:03.517 SPDK target shutdown done 00:04:03.517 10:17:37 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:03.517 Success 00:04:03.517 00:04:03.517 real 0m1.585s 00:04:03.517 user 0m1.205s 00:04:03.517 sys 0m0.568s 00:04:03.517 10:17:37 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:03.517 10:17:37 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:03.517 ************************************ 00:04:03.517 END TEST json_config_extra_key 00:04:03.517 ************************************ 00:04:03.517 10:17:37 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:03.517 10:17:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.517 10:17:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.517 10:17:37 -- common/autotest_common.sh@10 -- # set +x 00:04:03.517 ************************************ 00:04:03.517 START TEST alias_rpc 00:04:03.517 ************************************ 00:04:03.517 10:17:37 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:03.517 * Looking for test storage... 
00:04:03.517 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:03.517 10:17:37 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:03.517 10:17:37 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:03.517 10:17:37 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:03.517 10:17:37 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:03.517 10:17:37 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:03.517 10:17:37 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:03.517 10:17:37 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:03.776 10:17:37 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:03.776 10:17:37 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:03.776 10:17:37 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:03.776 10:17:37 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:03.776 10:17:37 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:03.776 10:17:37 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:03.776 10:17:37 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:03.776 10:17:37 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:03.776 10:17:37 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:03.776 10:17:37 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:03.776 10:17:37 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:03.776 10:17:37 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:03.776 10:17:37 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:03.776 10:17:37 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:03.776 10:17:37 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:03.776 10:17:37 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:03.776 10:17:37 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:03.776 10:17:37 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:03.776 10:17:37 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:03.776 10:17:37 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:03.776 10:17:37 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:03.776 10:17:37 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:03.776 10:17:37 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:03.776 10:17:37 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:03.776 10:17:37 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:03.776 10:17:37 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:03.776 10:17:37 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:03.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.776 --rc genhtml_branch_coverage=1 00:04:03.777 --rc genhtml_function_coverage=1 00:04:03.777 --rc genhtml_legend=1 00:04:03.777 --rc geninfo_all_blocks=1 00:04:03.777 --rc geninfo_unexecuted_blocks=1 00:04:03.777 00:04:03.777 ' 00:04:03.777 10:17:37 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:03.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.777 --rc genhtml_branch_coverage=1 00:04:03.777 --rc genhtml_function_coverage=1 00:04:03.777 --rc genhtml_legend=1 00:04:03.777 --rc geninfo_all_blocks=1 00:04:03.777 --rc geninfo_unexecuted_blocks=1 00:04:03.777 00:04:03.777 ' 00:04:03.777 10:17:37 
alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:03.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.777 --rc genhtml_branch_coverage=1 00:04:03.777 --rc genhtml_function_coverage=1 00:04:03.777 --rc genhtml_legend=1 00:04:03.777 --rc geninfo_all_blocks=1 00:04:03.777 --rc geninfo_unexecuted_blocks=1 00:04:03.777 00:04:03.777 ' 00:04:03.777 10:17:37 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:03.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.777 --rc genhtml_branch_coverage=1 00:04:03.777 --rc genhtml_function_coverage=1 00:04:03.777 --rc genhtml_legend=1 00:04:03.777 --rc geninfo_all_blocks=1 00:04:03.777 --rc geninfo_unexecuted_blocks=1 00:04:03.777 00:04:03.777 ' 00:04:03.777 10:17:37 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:03.777 10:17:37 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1328244 00:04:03.777 10:17:37 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1328244 00:04:03.777 10:17:37 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.777 10:17:37 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 1328244 ']' 00:04:03.777 10:17:37 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:03.777 10:17:37 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:03.777 10:17:37 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:03.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:03.777 10:17:37 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:03.777 10:17:37 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.777 [2024-12-12 10:17:37.608066] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:04:03.777 [2024-12-12 10:17:37.608111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1328244 ] 00:04:03.777 [2024-12-12 10:17:37.682318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.777 [2024-12-12 10:17:37.722460] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:04.711 10:17:38 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:04.711 10:17:38 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:04.711 10:17:38 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:04.711 10:17:38 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1328244 00:04:04.711 10:17:38 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1328244 ']' 00:04:04.712 10:17:38 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1328244 00:04:04.712 10:17:38 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:04.712 10:17:38 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:04.712 10:17:38 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1328244 00:04:04.712 10:17:38 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:04.712 10:17:38 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:04.712 10:17:38 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1328244' 00:04:04.712 killing process with pid 1328244 00:04:04.712 10:17:38 alias_rpc -- common/autotest_common.sh@973 -- # kill 1328244 00:04:04.712 10:17:38 alias_rpc -- common/autotest_common.sh@978 -- # wait 1328244 00:04:05.280 00:04:05.280 real 0m1.619s 00:04:05.280 user 0m1.785s 00:04:05.280 sys 0m0.437s 00:04:05.280 10:17:39 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:05.280 10:17:39 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.280 ************************************ 00:04:05.280 END TEST alias_rpc 00:04:05.280 ************************************ 00:04:05.280 10:17:39 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:05.280 10:17:39 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:05.280 10:17:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:05.280 10:17:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.280 10:17:39 -- common/autotest_common.sh@10 -- # set +x 00:04:05.280 ************************************ 00:04:05.280 START TEST spdkcli_tcp 00:04:05.280 ************************************ 00:04:05.280 10:17:39 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:05.280 * Looking for test storage... 
00:04:05.280 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:05.280 10:17:39 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:05.280 10:17:39 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:05.280 10:17:39 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:05.280 10:17:39 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:05.280 10:17:39 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:05.280 10:17:39 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:05.280 10:17:39 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:05.280 10:17:39 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:05.280 10:17:39 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:05.280 10:17:39 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:05.280 10:17:39 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:05.280 10:17:39 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:05.280 10:17:39 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:05.280 10:17:39 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:05.280 10:17:39 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:05.280 10:17:39 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:05.280 10:17:39 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:05.280 10:17:39 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:05.280 10:17:39 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:05.280 10:17:39 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:05.280 10:17:39 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:05.280 10:17:39 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:05.280 10:17:39 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:05.280 10:17:39 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:05.280 10:17:39 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:05.280 10:17:39 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:05.280 10:17:39 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:05.280 10:17:39 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:05.280 10:17:39 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:05.280 10:17:39 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:05.280 10:17:39 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:05.280 10:17:39 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:05.280 10:17:39 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:05.280 10:17:39 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:05.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.280 --rc genhtml_branch_coverage=1 00:04:05.280 --rc genhtml_function_coverage=1 00:04:05.280 --rc genhtml_legend=1 00:04:05.280 --rc geninfo_all_blocks=1 00:04:05.280 --rc geninfo_unexecuted_blocks=1 00:04:05.280 00:04:05.280 ' 00:04:05.280 10:17:39 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:05.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.280 --rc genhtml_branch_coverage=1 00:04:05.280 --rc genhtml_function_coverage=1 00:04:05.280 --rc genhtml_legend=1 00:04:05.280 --rc geninfo_all_blocks=1 00:04:05.280 --rc 
geninfo_unexecuted_blocks=1 00:04:05.280 00:04:05.280 ' 00:04:05.280 10:17:39 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:05.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.280 --rc genhtml_branch_coverage=1 00:04:05.280 --rc genhtml_function_coverage=1 00:04:05.280 --rc genhtml_legend=1 00:04:05.280 --rc geninfo_all_blocks=1 00:04:05.280 --rc geninfo_unexecuted_blocks=1 00:04:05.280 00:04:05.280 ' 00:04:05.280 10:17:39 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:05.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.280 --rc genhtml_branch_coverage=1 00:04:05.280 --rc genhtml_function_coverage=1 00:04:05.280 --rc genhtml_legend=1 00:04:05.280 --rc geninfo_all_blocks=1 00:04:05.280 --rc geninfo_unexecuted_blocks=1 00:04:05.280 00:04:05.280 ' 00:04:05.280 10:17:39 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:05.280 10:17:39 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:05.280 10:17:39 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:05.280 10:17:39 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:05.280 10:17:39 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:05.280 10:17:39 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:05.280 10:17:39 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:05.280 10:17:39 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:05.280 10:17:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:05.280 10:17:39 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1328595 00:04:05.280 10:17:39 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1328595 00:04:05.280 10:17:39 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:05.280 10:17:39 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1328595 ']' 00:04:05.280 10:17:39 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:05.280 10:17:39 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:05.280 10:17:39 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:05.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:05.280 10:17:39 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:05.280 10:17:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:05.539 [2024-12-12 10:17:39.304032] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:04:05.539 [2024-12-12 10:17:39.304082] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1328595 ] 00:04:05.539 [2024-12-12 10:17:39.379787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:05.539 [2024-12-12 10:17:39.422942] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:05.539 [2024-12-12 10:17:39.422943] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.797 10:17:39 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:05.797 10:17:39 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:05.797 10:17:39 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1328749 00:04:05.797 10:17:39 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:05.797 10:17:39 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:05.797 [ 00:04:05.797 "bdev_malloc_delete", 00:04:05.797 "bdev_malloc_create", 00:04:05.797 "bdev_null_resize", 00:04:05.797 "bdev_null_delete", 00:04:05.797 "bdev_null_create", 00:04:05.797 "bdev_nvme_cuse_unregister", 00:04:05.797 "bdev_nvme_cuse_register", 00:04:05.797 "bdev_opal_new_user", 00:04:05.797 "bdev_opal_set_lock_state", 00:04:05.797 "bdev_opal_delete", 00:04:05.797 "bdev_opal_get_info", 00:04:05.797 "bdev_opal_create", 00:04:05.797 "bdev_nvme_opal_revert", 00:04:05.797 "bdev_nvme_opal_init", 00:04:05.797 "bdev_nvme_send_cmd", 00:04:05.797 "bdev_nvme_set_keys", 00:04:05.797 "bdev_nvme_get_path_iostat", 00:04:05.798 "bdev_nvme_get_mdns_discovery_info", 00:04:05.798 "bdev_nvme_stop_mdns_discovery", 00:04:05.798 "bdev_nvme_start_mdns_discovery", 00:04:05.798 "bdev_nvme_set_multipath_policy", 00:04:05.798 "bdev_nvme_set_preferred_path", 00:04:05.798 "bdev_nvme_get_io_paths", 00:04:05.798 "bdev_nvme_remove_error_injection", 00:04:05.798 "bdev_nvme_add_error_injection", 00:04:05.798 "bdev_nvme_get_discovery_info", 00:04:05.798 "bdev_nvme_stop_discovery", 00:04:05.798 "bdev_nvme_start_discovery", 00:04:05.798 "bdev_nvme_get_controller_health_info", 00:04:05.798 "bdev_nvme_disable_controller", 00:04:05.798 "bdev_nvme_enable_controller", 00:04:05.798 "bdev_nvme_reset_controller", 00:04:05.798 "bdev_nvme_get_transport_statistics", 00:04:05.798 "bdev_nvme_apply_firmware", 00:04:05.798 "bdev_nvme_detach_controller", 00:04:05.798 "bdev_nvme_get_controllers", 00:04:05.798 "bdev_nvme_attach_controller", 00:04:05.798 "bdev_nvme_set_hotplug", 00:04:05.798 "bdev_nvme_set_options", 00:04:05.798 "bdev_passthru_delete", 00:04:05.798 "bdev_passthru_create", 00:04:05.798 "bdev_lvol_set_parent_bdev", 00:04:05.798 "bdev_lvol_set_parent", 00:04:05.798 "bdev_lvol_check_shallow_copy", 00:04:05.798 "bdev_lvol_start_shallow_copy", 00:04:05.798 "bdev_lvol_grow_lvstore", 00:04:05.798 "bdev_lvol_get_lvols", 00:04:05.798 "bdev_lvol_get_lvstores", 00:04:05.798 "bdev_lvol_delete", 00:04:05.798 "bdev_lvol_set_read_only", 00:04:05.798 "bdev_lvol_resize", 00:04:05.798 "bdev_lvol_decouple_parent", 00:04:05.798 "bdev_lvol_inflate", 00:04:05.798 "bdev_lvol_rename", 00:04:05.798 "bdev_lvol_clone_bdev", 00:04:05.798 "bdev_lvol_clone", 00:04:05.798 "bdev_lvol_snapshot", 00:04:05.798 "bdev_lvol_create", 00:04:05.798 "bdev_lvol_delete_lvstore", 00:04:05.798 "bdev_lvol_rename_lvstore", 
00:04:05.798 "bdev_lvol_create_lvstore", 00:04:05.798 "bdev_raid_set_options", 00:04:05.798 "bdev_raid_remove_base_bdev", 00:04:05.798 "bdev_raid_add_base_bdev", 00:04:05.798 "bdev_raid_delete", 00:04:05.798 "bdev_raid_create", 00:04:05.798 "bdev_raid_get_bdevs", 00:04:05.798 "bdev_error_inject_error", 00:04:05.798 "bdev_error_delete", 00:04:05.798 "bdev_error_create", 00:04:05.798 "bdev_split_delete", 00:04:05.798 "bdev_split_create", 00:04:05.798 "bdev_delay_delete", 00:04:05.798 "bdev_delay_create", 00:04:05.798 "bdev_delay_update_latency", 00:04:05.798 "bdev_zone_block_delete", 00:04:05.798 "bdev_zone_block_create", 00:04:05.798 "blobfs_create", 00:04:05.798 "blobfs_detect", 00:04:05.798 "blobfs_set_cache_size", 00:04:05.798 "bdev_aio_delete", 00:04:05.798 "bdev_aio_rescan", 00:04:05.798 "bdev_aio_create", 00:04:05.798 "bdev_ftl_set_property", 00:04:05.798 "bdev_ftl_get_properties", 00:04:05.798 "bdev_ftl_get_stats", 00:04:05.798 "bdev_ftl_unmap", 00:04:05.798 "bdev_ftl_unload", 00:04:05.798 "bdev_ftl_delete", 00:04:05.798 "bdev_ftl_load", 00:04:05.798 "bdev_ftl_create", 00:04:05.798 "bdev_virtio_attach_controller", 00:04:05.798 "bdev_virtio_scsi_get_devices", 00:04:05.798 "bdev_virtio_detach_controller", 00:04:05.798 "bdev_virtio_blk_set_hotplug", 00:04:05.798 "bdev_iscsi_delete", 00:04:05.798 "bdev_iscsi_create", 00:04:05.798 "bdev_iscsi_set_options", 00:04:05.798 "accel_error_inject_error", 00:04:05.798 "ioat_scan_accel_module", 00:04:05.798 "dsa_scan_accel_module", 00:04:05.798 "iaa_scan_accel_module", 00:04:05.798 "vfu_virtio_create_fs_endpoint", 00:04:05.798 "vfu_virtio_create_scsi_endpoint", 00:04:05.798 "vfu_virtio_scsi_remove_target", 00:04:05.798 "vfu_virtio_scsi_add_target", 00:04:05.798 "vfu_virtio_create_blk_endpoint", 00:04:05.798 "vfu_virtio_delete_endpoint", 00:04:05.798 "keyring_file_remove_key", 00:04:05.798 "keyring_file_add_key", 00:04:05.798 "keyring_linux_set_options", 00:04:05.798 "fsdev_aio_delete", 00:04:05.798 "fsdev_aio_create", 00:04:05.798 "iscsi_get_histogram", 00:04:05.798 "iscsi_enable_histogram", 00:04:05.798 "iscsi_set_options", 00:04:05.798 "iscsi_get_auth_groups", 00:04:05.798 "iscsi_auth_group_remove_secret", 00:04:05.798 "iscsi_auth_group_add_secret", 00:04:05.798 "iscsi_delete_auth_group", 00:04:05.798 "iscsi_create_auth_group", 00:04:05.798 "iscsi_set_discovery_auth", 00:04:05.798 "iscsi_get_options", 00:04:05.798 "iscsi_target_node_request_logout", 00:04:05.798 "iscsi_target_node_set_redirect", 00:04:05.798 "iscsi_target_node_set_auth", 00:04:05.798 "iscsi_target_node_add_lun", 00:04:05.798 "iscsi_get_stats", 00:04:05.798 "iscsi_get_connections", 00:04:05.798 "iscsi_portal_group_set_auth", 00:04:05.798 "iscsi_start_portal_group", 00:04:05.798 "iscsi_delete_portal_group", 00:04:05.798 "iscsi_create_portal_group", 00:04:05.798 "iscsi_get_portal_groups", 00:04:05.798 "iscsi_delete_target_node", 00:04:05.798 "iscsi_target_node_remove_pg_ig_maps", 00:04:05.798 "iscsi_target_node_add_pg_ig_maps", 00:04:05.798 "iscsi_create_target_node", 00:04:05.798 "iscsi_get_target_nodes", 00:04:05.798 "iscsi_delete_initiator_group", 00:04:05.798 "iscsi_initiator_group_remove_initiators", 00:04:05.798 "iscsi_initiator_group_add_initiators", 00:04:05.798 "iscsi_create_initiator_group", 00:04:05.798 "iscsi_get_initiator_groups", 00:04:05.798 "nvmf_set_crdt", 00:04:05.798 "nvmf_set_config", 00:04:05.798 "nvmf_set_max_subsystems", 00:04:05.798 "nvmf_stop_mdns_prr", 00:04:05.798 "nvmf_publish_mdns_prr", 00:04:05.798 "nvmf_subsystem_get_listeners", 00:04:05.798 
"nvmf_subsystem_get_qpairs", 00:04:05.798 "nvmf_subsystem_get_controllers", 00:04:05.798 "nvmf_get_stats", 00:04:05.798 "nvmf_get_transports", 00:04:05.798 "nvmf_create_transport", 00:04:05.798 "nvmf_get_targets", 00:04:05.798 "nvmf_delete_target", 00:04:05.798 "nvmf_create_target", 00:04:05.798 "nvmf_subsystem_allow_any_host", 00:04:05.798 "nvmf_subsystem_set_keys", 00:04:05.798 "nvmf_subsystem_remove_host", 00:04:05.798 "nvmf_subsystem_add_host", 00:04:05.798 "nvmf_ns_remove_host", 00:04:05.798 "nvmf_ns_add_host", 00:04:05.798 "nvmf_subsystem_remove_ns", 00:04:05.798 "nvmf_subsystem_set_ns_ana_group", 00:04:05.798 "nvmf_subsystem_add_ns", 00:04:05.798 "nvmf_subsystem_listener_set_ana_state", 00:04:05.798 "nvmf_discovery_get_referrals", 00:04:05.798 "nvmf_discovery_remove_referral", 00:04:05.798 "nvmf_discovery_add_referral", 00:04:05.798 "nvmf_subsystem_remove_listener", 00:04:05.798 "nvmf_subsystem_add_listener", 00:04:05.798 "nvmf_delete_subsystem", 00:04:05.798 "nvmf_create_subsystem", 00:04:05.798 "nvmf_get_subsystems", 00:04:05.798 "env_dpdk_get_mem_stats", 00:04:05.798 "nbd_get_disks", 00:04:05.798 "nbd_stop_disk", 00:04:05.798 "nbd_start_disk", 00:04:05.798 "ublk_recover_disk", 00:04:05.798 "ublk_get_disks", 00:04:05.798 "ublk_stop_disk", 00:04:05.798 "ublk_start_disk", 00:04:05.798 "ublk_destroy_target", 00:04:05.798 "ublk_create_target", 00:04:05.798 "virtio_blk_create_transport", 00:04:05.798 "virtio_blk_get_transports", 00:04:05.798 "vhost_controller_set_coalescing", 00:04:05.798 "vhost_get_controllers", 00:04:05.798 "vhost_delete_controller", 00:04:05.798 "vhost_create_blk_controller", 00:04:05.798 "vhost_scsi_controller_remove_target", 00:04:05.798 "vhost_scsi_controller_add_target", 00:04:05.798 "vhost_start_scsi_controller", 00:04:05.798 "vhost_create_scsi_controller", 00:04:05.798 "thread_set_cpumask", 00:04:05.798 "scheduler_set_options", 00:04:05.798 "framework_get_governor", 00:04:05.798 "framework_get_scheduler", 00:04:05.798 "framework_set_scheduler", 00:04:05.798 "framework_get_reactors", 00:04:05.798 "thread_get_io_channels", 00:04:05.798 "thread_get_pollers", 00:04:05.798 "thread_get_stats", 00:04:05.798 "framework_monitor_context_switch", 00:04:05.798 "spdk_kill_instance", 00:04:05.798 "log_enable_timestamps", 00:04:05.798 "log_get_flags", 00:04:05.798 "log_clear_flag", 00:04:05.798 "log_set_flag", 00:04:05.798 "log_get_level", 00:04:05.798 "log_set_level", 00:04:05.798 "log_get_print_level", 00:04:05.798 "log_set_print_level", 00:04:05.798 "framework_enable_cpumask_locks", 00:04:05.798 "framework_disable_cpumask_locks", 00:04:05.798 "framework_wait_init", 00:04:05.798 "framework_start_init", 00:04:05.798 "scsi_get_devices", 00:04:05.798 "bdev_get_histogram", 00:04:05.798 "bdev_enable_histogram", 00:04:05.798 "bdev_set_qos_limit", 00:04:05.798 "bdev_set_qd_sampling_period", 00:04:05.798 "bdev_get_bdevs", 00:04:05.798 "bdev_reset_iostat", 00:04:05.798 "bdev_get_iostat", 00:04:05.798 "bdev_examine", 00:04:05.798 "bdev_wait_for_examine", 00:04:05.798 "bdev_set_options", 00:04:05.798 "accel_get_stats", 00:04:05.798 "accel_set_options", 00:04:05.798 "accel_set_driver", 00:04:05.798 "accel_crypto_key_destroy", 00:04:05.798 "accel_crypto_keys_get", 00:04:05.798 "accel_crypto_key_create", 00:04:05.798 "accel_assign_opc", 00:04:05.798 "accel_get_module_info", 00:04:05.798 "accel_get_opc_assignments", 00:04:05.798 "vmd_rescan", 00:04:05.798 "vmd_remove_device", 00:04:05.798 "vmd_enable", 00:04:05.798 "sock_get_default_impl", 00:04:05.799 "sock_set_default_impl", 
00:04:05.799 "sock_impl_set_options", 00:04:05.799 "sock_impl_get_options", 00:04:05.799 "iobuf_get_stats", 00:04:05.799 "iobuf_set_options", 00:04:05.799 "keyring_get_keys", 00:04:05.799 "vfu_tgt_set_base_path", 00:04:05.799 "framework_get_pci_devices", 00:04:05.799 "framework_get_config", 00:04:05.799 "framework_get_subsystems", 00:04:05.799 "fsdev_set_opts", 00:04:05.799 "fsdev_get_opts", 00:04:05.799 "trace_get_info", 00:04:05.799 "trace_get_tpoint_group_mask", 00:04:05.799 "trace_disable_tpoint_group", 00:04:05.799 "trace_enable_tpoint_group", 00:04:05.799 "trace_clear_tpoint_mask", 00:04:05.799 "trace_set_tpoint_mask", 00:04:05.799 "notify_get_notifications", 00:04:05.799 "notify_get_types", 00:04:05.799 "spdk_get_version", 00:04:05.799 "rpc_get_methods" 00:04:05.799 ] 00:04:06.057 10:17:39 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:06.057 10:17:39 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:06.057 10:17:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:06.057 10:17:39 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:06.057 10:17:39 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1328595 00:04:06.057 10:17:39 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1328595 ']' 00:04:06.057 10:17:39 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1328595 00:04:06.057 10:17:39 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:06.057 10:17:39 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:06.057 10:17:39 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1328595 00:04:06.057 10:17:39 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:06.057 10:17:39 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:06.057 10:17:39 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1328595' 00:04:06.057 killing process with pid 1328595 00:04:06.057 10:17:39 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1328595 00:04:06.057 10:17:39 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1328595 00:04:06.316 00:04:06.316 real 0m1.150s 00:04:06.316 user 0m1.928s 00:04:06.316 sys 0m0.436s 00:04:06.316 10:17:40 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:06.316 10:17:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:06.316 ************************************ 00:04:06.316 END TEST spdkcli_tcp 00:04:06.316 ************************************ 00:04:06.316 10:17:40 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:06.316 10:17:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.316 10:17:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.316 10:17:40 -- common/autotest_common.sh@10 -- # set +x 00:04:06.316 ************************************ 00:04:06.316 START TEST dpdk_mem_utility 00:04:06.316 ************************************ 00:04:06.316 10:17:40 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:06.575 * Looking for test storage... 
00:04:06.575 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:06.575 10:17:40 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:06.575 10:17:40 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:04:06.575 10:17:40 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:06.575 10:17:40 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:06.575 10:17:40 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:06.575 10:17:40 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:06.575 10:17:40 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:06.575 10:17:40 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:06.575 10:17:40 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:06.575 10:17:40 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:06.575 10:17:40 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:06.575 10:17:40 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:06.575 10:17:40 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:06.575 10:17:40 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:06.575 10:17:40 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:06.575 10:17:40 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:06.575 10:17:40 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:06.575 10:17:40 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:06.575 10:17:40 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:06.575 10:17:40 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:06.575 10:17:40 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:06.575 10:17:40 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:06.575 10:17:40 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:06.575 10:17:40 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:06.575 10:17:40 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:06.575 10:17:40 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:06.575 10:17:40 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:06.575 10:17:40 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:06.575 10:17:40 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:06.575 10:17:40 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:06.575 10:17:40 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:06.575 10:17:40 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:06.575 10:17:40 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:06.575 10:17:40 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:06.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.575 --rc genhtml_branch_coverage=1 00:04:06.575 --rc genhtml_function_coverage=1 00:04:06.575 --rc genhtml_legend=1 00:04:06.575 --rc geninfo_all_blocks=1 00:04:06.575 --rc geninfo_unexecuted_blocks=1 00:04:06.575 00:04:06.575 ' 00:04:06.575 10:17:40 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:06.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.575 --rc 
genhtml_branch_coverage=1 00:04:06.575 --rc genhtml_function_coverage=1 00:04:06.575 --rc genhtml_legend=1 00:04:06.575 --rc geninfo_all_blocks=1 00:04:06.575 --rc geninfo_unexecuted_blocks=1 00:04:06.575 00:04:06.575 ' 00:04:06.575 10:17:40 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:06.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.575 --rc genhtml_branch_coverage=1 00:04:06.575 --rc genhtml_function_coverage=1 00:04:06.575 --rc genhtml_legend=1 00:04:06.575 --rc geninfo_all_blocks=1 00:04:06.575 --rc geninfo_unexecuted_blocks=1 00:04:06.575 00:04:06.575 ' 00:04:06.575 10:17:40 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:06.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.575 --rc genhtml_branch_coverage=1 00:04:06.575 --rc genhtml_function_coverage=1 00:04:06.575 --rc genhtml_legend=1 00:04:06.575 --rc geninfo_all_blocks=1 00:04:06.575 --rc geninfo_unexecuted_blocks=1 00:04:06.575 00:04:06.575 ' 00:04:06.575 10:17:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:06.575 10:17:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1328834 00:04:06.576 10:17:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1328834 00:04:06.576 10:17:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:06.576 10:17:40 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 1328834 ']' 00:04:06.576 10:17:40 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:06.576 10:17:40 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:06.576 10:17:40 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:06.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:06.576 10:17:40 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:06.576 10:17:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:06.576 [2024-12-12 10:17:40.505437] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
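The memory report that follows is generated in two steps: the env_dpdk_get_mem_stats RPC asks the running spdk_tgt to dump its DPDK allocator state (the log shows it returning /tmp/spdk_mem_dump.txt as the dump file), and dpdk_mem_info.py then post-processes that dump. Against a live target this is roughly (a sketch, paths shortened from the absolute ones in the log):

    ./scripts/rpc.py env_dpdk_get_mem_stats   # target writes /tmp/spdk_mem_dump.txt
    ./scripts/dpdk_mem_info.py                # heap/mempool/memzone summary
    ./scripts/dpdk_mem_info.py -m 0           # element-level detail for heap 0

The -m 0 form is what produces the long "list of free elements" / "list of standard malloc elements" output printed below.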
00:04:06.576 [2024-12-12 10:17:40.505486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1328834 ] 00:04:06.576 [2024-12-12 10:17:40.581432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.834 [2024-12-12 10:17:40.624070] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.834 10:17:40 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:06.834 10:17:40 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:06.834 10:17:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:06.834 10:17:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:06.834 10:17:40 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.834 10:17:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:06.834 { 00:04:06.834 "filename": "/tmp/spdk_mem_dump.txt" 00:04:06.834 } 00:04:06.834 10:17:40 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.834 10:17:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:07.094 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:07.094 1 heaps totaling size 818.000000 MiB 00:04:07.094 size: 818.000000 MiB heap id: 0 00:04:07.094 end heaps---------- 00:04:07.094 9 mempools totaling size 603.782043 MiB 00:04:07.094 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:07.094 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:07.094 size: 100.555481 MiB name: bdev_io_1328834 00:04:07.094 size: 50.003479 MiB name: msgpool_1328834 00:04:07.094 size: 36.509338 MiB name: fsdev_io_1328834 00:04:07.094 size: 21.763794 MiB name: PDU_Pool 00:04:07.094 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:07.094 size: 4.133484 MiB name: evtpool_1328834 00:04:07.094 size: 0.026123 MiB name: Session_Pool 00:04:07.094 end mempools------- 00:04:07.094 6 memzones totaling size 4.142822 MiB 00:04:07.094 size: 1.000366 MiB name: RG_ring_0_1328834 00:04:07.094 size: 1.000366 MiB name: RG_ring_1_1328834 00:04:07.094 size: 1.000366 MiB name: RG_ring_4_1328834 00:04:07.094 size: 1.000366 MiB name: RG_ring_5_1328834 00:04:07.094 size: 0.125366 MiB name: RG_ring_2_1328834 00:04:07.094 size: 0.015991 MiB name: RG_ring_3_1328834 00:04:07.094 end memzones------- 00:04:07.094 10:17:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:07.094 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:07.094 list of free elements. 
size: 10.852478 MiB 00:04:07.094 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:07.094 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:07.094 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:07.094 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:07.094 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:07.094 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:07.094 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:07.094 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:07.094 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:04:07.094 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:07.094 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:07.094 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:07.094 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:07.094 element at address: 0x200028200000 with size: 0.410034 MiB 00:04:07.094 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:07.094 list of standard malloc elements. size: 199.218628 MiB 00:04:07.094 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:07.094 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:07.094 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:07.094 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:07.094 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:07.094 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:07.094 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:07.094 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:07.094 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:07.094 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:07.094 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:07.094 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:07.094 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:07.094 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:07.094 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:07.094 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:07.094 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:07.094 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:07.094 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:07.094 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:07.094 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:07.094 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:07.094 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:07.094 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:07.094 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:07.094 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:07.094 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:07.094 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:07.094 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:07.094 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:07.094 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:07.094 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:07.094 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:04:07.094 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:07.094 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:07.094 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:07.094 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:07.094 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:07.094 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:07.094 element at address: 0x200028268f80 with size: 0.000183 MiB 00:04:07.094 element at address: 0x200028269040 with size: 0.000183 MiB 00:04:07.094 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:04:07.094 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:07.094 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:07.094 list of memzone associated elements. size: 607.928894 MiB 00:04:07.094 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:07.094 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:07.094 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:07.094 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:07.094 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:07.094 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_1328834_0 00:04:07.094 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:07.094 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1328834_0 00:04:07.094 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:07.094 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1328834_0 00:04:07.094 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:07.094 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:07.094 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:07.094 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:07.094 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:07.094 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1328834_0 00:04:07.094 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:07.094 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1328834 00:04:07.094 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:07.094 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1328834 00:04:07.094 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:07.094 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:07.094 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:07.094 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:07.094 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:07.094 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:07.094 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:07.094 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:07.094 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:07.094 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1328834 00:04:07.094 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:07.094 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1328834 00:04:07.094 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:07.094 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1328834 00:04:07.094 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:04:07.094 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1328834 00:04:07.094 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:07.094 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1328834 00:04:07.094 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:07.094 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1328834 00:04:07.094 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:07.094 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:07.094 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:07.094 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:07.094 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:07.094 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:07.094 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:07.094 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1328834 00:04:07.094 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:07.094 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1328834 00:04:07.094 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:07.094 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:07.094 element at address: 0x200028269100 with size: 0.023743 MiB 00:04:07.094 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:07.094 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:07.094 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1328834 00:04:07.094 element at address: 0x20002826f240 with size: 0.002441 MiB 00:04:07.094 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:07.094 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:07.094 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1328834 00:04:07.095 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:07.095 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1328834 00:04:07.095 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:07.095 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1328834 00:04:07.095 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:04:07.095 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:07.095 10:17:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:07.095 10:17:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1328834 00:04:07.095 10:17:40 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1328834 ']' 00:04:07.095 10:17:40 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1328834 00:04:07.095 10:17:40 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:07.095 10:17:40 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:07.095 10:17:40 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1328834 00:04:07.095 10:17:40 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:07.095 10:17:40 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:07.095 10:17:40 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1328834' 00:04:07.095 killing process with pid 1328834 00:04:07.095 10:17:40 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1328834 00:04:07.095 10:17:40 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1328834 00:04:07.354 00:04:07.354 real 0m0.991s 00:04:07.354 user 0m0.925s 00:04:07.354 sys 0m0.399s 00:04:07.354 10:17:41 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:07.354 10:17:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:07.354 ************************************ 00:04:07.354 END TEST dpdk_mem_utility 00:04:07.354 ************************************ 00:04:07.354 10:17:41 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:07.354 10:17:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:07.354 10:17:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.354 10:17:41 -- common/autotest_common.sh@10 -- # set +x 00:04:07.354 ************************************ 00:04:07.354 START TEST event 00:04:07.354 ************************************ 00:04:07.354 10:17:41 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:07.613 * Looking for test storage... 00:04:07.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:07.613 10:17:41 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:07.613 10:17:41 event -- common/autotest_common.sh@1711 -- # lcov --version 00:04:07.613 10:17:41 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:07.613 10:17:41 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:07.613 10:17:41 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:07.613 10:17:41 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:07.613 10:17:41 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:07.613 10:17:41 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:07.613 10:17:41 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:07.613 10:17:41 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:07.613 10:17:41 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:07.613 10:17:41 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:07.613 10:17:41 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:07.613 10:17:41 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:07.613 10:17:41 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:07.613 10:17:41 event -- scripts/common.sh@344 -- # case "$op" in 00:04:07.613 10:17:41 event -- scripts/common.sh@345 -- # : 1 00:04:07.613 10:17:41 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:07.613 10:17:41 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:07.613 10:17:41 event -- scripts/common.sh@365 -- # decimal 1 00:04:07.613 10:17:41 event -- scripts/common.sh@353 -- # local d=1 00:04:07.613 10:17:41 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:07.613 10:17:41 event -- scripts/common.sh@355 -- # echo 1 00:04:07.613 10:17:41 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:07.613 10:17:41 event -- scripts/common.sh@366 -- # decimal 2 00:04:07.613 10:17:41 event -- scripts/common.sh@353 -- # local d=2 00:04:07.613 10:17:41 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:07.613 10:17:41 event -- scripts/common.sh@355 -- # echo 2 00:04:07.613 10:17:41 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:07.613 10:17:41 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:07.613 10:17:41 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:07.613 10:17:41 event -- scripts/common.sh@368 -- # return 0 00:04:07.613 10:17:41 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:07.613 10:17:41 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:07.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.613 --rc genhtml_branch_coverage=1 00:04:07.613 --rc genhtml_function_coverage=1 00:04:07.613 --rc genhtml_legend=1 00:04:07.613 --rc geninfo_all_blocks=1 00:04:07.613 --rc geninfo_unexecuted_blocks=1 00:04:07.613 00:04:07.613 ' 00:04:07.613 10:17:41 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:07.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.613 --rc genhtml_branch_coverage=1 00:04:07.613 --rc genhtml_function_coverage=1 00:04:07.613 --rc genhtml_legend=1 00:04:07.613 --rc geninfo_all_blocks=1 00:04:07.613 --rc geninfo_unexecuted_blocks=1 00:04:07.613 00:04:07.613 ' 00:04:07.613 10:17:41 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:07.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.613 --rc genhtml_branch_coverage=1 00:04:07.613 --rc genhtml_function_coverage=1 00:04:07.613 --rc genhtml_legend=1 00:04:07.613 --rc geninfo_all_blocks=1 00:04:07.613 --rc geninfo_unexecuted_blocks=1 00:04:07.613 00:04:07.613 ' 00:04:07.613 10:17:41 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:07.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.613 --rc genhtml_branch_coverage=1 00:04:07.613 --rc genhtml_function_coverage=1 00:04:07.613 --rc genhtml_legend=1 00:04:07.613 --rc geninfo_all_blocks=1 00:04:07.613 --rc geninfo_unexecuted_blocks=1 00:04:07.613 00:04:07.613 ' 00:04:07.613 10:17:41 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:07.613 10:17:41 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:07.613 10:17:41 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:07.613 10:17:41 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:07.613 10:17:41 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.613 10:17:41 event -- common/autotest_common.sh@10 -- # set +x 00:04:07.613 ************************************ 00:04:07.613 START TEST event_perf 00:04:07.613 ************************************ 00:04:07.613 10:17:41 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:04:07.613 Running I/O for 1 seconds...[2024-12-12 10:17:41.575911] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:04:07.613 [2024-12-12 10:17:41.575979] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1329118 ] 00:04:07.872 [2024-12-12 10:17:41.654031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:07.872 [2024-12-12 10:17:41.697883] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:07.872 [2024-12-12 10:17:41.697994] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:04:07.872 [2024-12-12 10:17:41.698099] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.872 [2024-12-12 10:17:41.698100] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:04:08.807 Running I/O for 1 seconds... 00:04:08.807 lcore 0: 201033 00:04:08.807 lcore 1: 201033 00:04:08.807 lcore 2: 201032 00:04:08.807 lcore 3: 201033 00:04:08.807 done. 00:04:08.807 00:04:08.807 real 0m1.183s 00:04:08.807 user 0m4.092s 00:04:08.807 sys 0m0.088s 00:04:08.807 10:17:42 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:08.807 10:17:42 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:08.807 ************************************ 00:04:08.807 END TEST event_perf 00:04:08.807 ************************************ 00:04:08.807 10:17:42 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:08.807 10:17:42 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:08.807 10:17:42 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.807 10:17:42 event -- common/autotest_common.sh@10 -- # set +x 00:04:08.807 ************************************ 00:04:08.807 START TEST event_reactor 00:04:08.807 ************************************ 00:04:08.807 10:17:42 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:08.807 [2024-12-12 10:17:42.823904] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
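With -m 0xF the event_perf app gets the four reactors started above; it then fires events at them for the requested duration, and the per-lcore counters printed next show how many events each reactor processed in the one-second window (the near-identical counts indicate an even spread). Stripped of the harness wrapper, the invocation is simply:

    test/event/event_perf/event_perf -m 0xF -t 1   # core mask 0xF, run for 1 second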
00:04:08.807 [2024-12-12 10:17:42.823966] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1329362 ] 00:04:09.066 [2024-12-12 10:17:42.899789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.066 [2024-12-12 10:17:42.939204] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.003 test_start 00:04:10.003 oneshot 00:04:10.003 tick 100 00:04:10.003 tick 100 00:04:10.003 tick 250 00:04:10.003 tick 100 00:04:10.003 tick 100 00:04:10.003 tick 250 00:04:10.003 tick 100 00:04:10.003 tick 500 00:04:10.003 tick 100 00:04:10.003 tick 100 00:04:10.003 tick 250 00:04:10.003 tick 100 00:04:10.003 tick 100 00:04:10.003 test_end 00:04:10.003 00:04:10.003 real 0m1.173s 00:04:10.003 user 0m1.102s 00:04:10.003 sys 0m0.067s 00:04:10.003 10:17:43 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:10.003 10:17:43 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:10.003 ************************************ 00:04:10.003 END TEST event_reactor 00:04:10.003 ************************************ 00:04:10.003 10:17:44 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:10.003 10:17:44 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:10.003 10:17:44 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:10.003 10:17:44 event -- common/autotest_common.sh@10 -- # set +x 00:04:10.262 ************************************ 00:04:10.262 START TEST event_reactor_perf 00:04:10.262 ************************************ 00:04:10.262 10:17:44 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:10.262 [2024-12-12 10:17:44.063810] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
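The event_reactor run that just finished uses a single reactor (-c 0x1); the "tick 100/250/500" lines bracketed by test_start/test_end appear to record timed pollers firing at their configured periods. The event_reactor_perf run beginning here then measures raw single-core event throughput. Both binaries take the same duration flag (paths shortened from the log's absolute ones):

    test/event/reactor/reactor -t 1
    test/event/reactor_perf/reactor_perf -t 1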
00:04:10.262 [2024-12-12 10:17:44.063884] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1329610 ] 00:04:10.262 [2024-12-12 10:17:44.141576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.262 [2024-12-12 10:17:44.182221] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.198 test_start 00:04:11.198 test_end 00:04:11.198 Performance: 517393 events per second 00:04:11.198 00:04:11.198 real 0m1.175s 00:04:11.198 user 0m1.093s 00:04:11.198 sys 0m0.078s 00:04:11.198 10:17:45 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:11.198 10:17:45 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:11.198 ************************************ 00:04:11.198 END TEST event_reactor_perf 00:04:11.198 ************************************ 00:04:11.457 10:17:45 event -- event/event.sh@49 -- # uname -s 00:04:11.457 10:17:45 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:11.457 10:17:45 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:11.457 10:17:45 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:11.457 10:17:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:11.457 10:17:45 event -- common/autotest_common.sh@10 -- # set +x 00:04:11.457 ************************************ 00:04:11.457 START TEST event_scheduler 00:04:11.457 ************************************ 00:04:11.457 10:17:45 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:11.457 * Looking for test storage... 
00:04:11.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:11.457 10:17:45 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:11.457 10:17:45 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:04:11.457 10:17:45 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:11.457 10:17:45 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:11.457 10:17:45 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:11.457 10:17:45 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:11.457 10:17:45 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:11.457 10:17:45 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:11.457 10:17:45 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:11.457 10:17:45 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:11.457 10:17:45 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:11.457 10:17:45 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:11.457 10:17:45 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:11.457 10:17:45 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:11.457 10:17:45 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:11.457 10:17:45 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:11.457 10:17:45 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:11.457 10:17:45 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:11.457 10:17:45 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:11.457 10:17:45 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:11.457 10:17:45 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:11.457 10:17:45 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:11.457 10:17:45 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:11.457 10:17:45 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:11.457 10:17:45 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:11.457 10:17:45 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:11.457 10:17:45 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:11.457 10:17:45 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:11.457 10:17:45 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:11.457 10:17:45 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:11.457 10:17:45 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:11.457 10:17:45 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:11.457 10:17:45 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:11.457 10:17:45 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:11.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.457 --rc genhtml_branch_coverage=1 00:04:11.457 --rc genhtml_function_coverage=1 00:04:11.457 --rc genhtml_legend=1 00:04:11.457 --rc geninfo_all_blocks=1 00:04:11.457 --rc geninfo_unexecuted_blocks=1 00:04:11.457 00:04:11.457 ' 00:04:11.457 10:17:45 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:11.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.457 --rc genhtml_branch_coverage=1 00:04:11.457 --rc genhtml_function_coverage=1 00:04:11.457 --rc genhtml_legend=1 00:04:11.457 --rc geninfo_all_blocks=1 00:04:11.457 --rc geninfo_unexecuted_blocks=1 00:04:11.457 00:04:11.457 ' 00:04:11.457 10:17:45 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:11.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.457 --rc genhtml_branch_coverage=1 00:04:11.457 --rc genhtml_function_coverage=1 00:04:11.457 --rc genhtml_legend=1 00:04:11.457 --rc geninfo_all_blocks=1 00:04:11.457 --rc geninfo_unexecuted_blocks=1 00:04:11.457 00:04:11.457 ' 00:04:11.457 10:17:45 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:11.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.457 --rc genhtml_branch_coverage=1 00:04:11.457 --rc genhtml_function_coverage=1 00:04:11.457 --rc genhtml_legend=1 00:04:11.458 --rc geninfo_all_blocks=1 00:04:11.458 --rc geninfo_unexecuted_blocks=1 00:04:11.458 00:04:11.458 ' 00:04:11.458 10:17:45 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:11.458 10:17:45 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1329892 00:04:11.458 10:17:45 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:11.458 10:17:45 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:11.458 10:17:45 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
1329892 00:04:11.458 10:17:45 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1329892 ']' 00:04:11.458 10:17:45 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:11.458 10:17:45 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:11.458 10:17:45 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:11.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:11.458 10:17:45 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:11.458 10:17:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:11.717 [2024-12-12 10:17:45.513242] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:04:11.717 [2024-12-12 10:17:45.513291] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1329892 ] 00:04:11.717 [2024-12-12 10:17:45.587460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:11.717 [2024-12-12 10:17:45.629616] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.717 [2024-12-12 10:17:45.629669] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:11.717 [2024-12-12 10:17:45.629774] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:04:11.717 [2024-12-12 10:17:45.629789] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:04:11.717 10:17:45 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:11.717 10:17:45 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:11.717 10:17:45 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:11.717 10:17:45 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.717 10:17:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:11.717 [2024-12-12 10:17:45.674438] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:11.717 [2024-12-12 10:17:45.674457] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:11.717 [2024-12-12 10:17:45.674466] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:11.717 [2024-12-12 10:17:45.674471] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:11.717 [2024-12-12 10:17:45.674477] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:11.717 10:17:45 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.717 10:17:45 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:11.717 10:17:45 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.717 10:17:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:11.976 [2024-12-12 10:17:45.750040] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
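Two details of the scheduler startup above are worth flagging. First, the dpdk_governor *ERROR* is non-fatal: the 0xF core mask covers only some SMT siblings on this host, so the dynamic scheduler simply runs without the DPDK frequency governor. Second, because the app was launched with --wait-for-rpc, the scheduler is selected over RPC before framework initialization completes; the reported defaults (load limit 20, core limit 80, core busy 95) are the thresholds the dynamic scheduler uses when rebalancing threads. The sequence the harness performs is roughly:

    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init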
00:04:11.976 10:17:45 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.976 10:17:45 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:11.976 10:17:45 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:11.976 10:17:45 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:11.976 10:17:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:11.976 ************************************ 00:04:11.976 START TEST scheduler_create_thread 00:04:11.976 ************************************ 00:04:11.976 10:17:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:11.976 10:17:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:11.976 10:17:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.976 10:17:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.976 2 00:04:11.976 10:17:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.976 10:17:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:11.976 10:17:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.976 10:17:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.976 3 00:04:11.976 10:17:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.976 10:17:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:11.976 10:17:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.976 10:17:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.976 4 00:04:11.976 10:17:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.976 10:17:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:11.976 10:17:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.976 10:17:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.976 5 00:04:11.976 10:17:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.976 10:17:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:11.976 10:17:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.976 10:17:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.976 6 00:04:11.976 10:17:45 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.976 10:17:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:11.976 10:17:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.976 10:17:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.976 7 00:04:11.976 10:17:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.976 10:17:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:11.976 10:17:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.976 10:17:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.976 8 00:04:11.976 10:17:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.976 10:17:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:11.976 10:17:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.976 10:17:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.976 9 00:04:11.976 10:17:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.976 10:17:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:11.976 10:17:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.976 10:17:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.976 10 00:04:11.976 10:17:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.976 10:17:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:11.976 10:17:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.976 10:17:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.976 10:17:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.976 10:17:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:11.976 10:17:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:11.976 10:17:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.976 10:17:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:12.543 10:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:12.544 10:17:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:12.544 10:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:12.544 10:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:13.919 10:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.919 10:17:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:13.919 10:17:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:13.919 10:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.919 10:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.295 10:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.295 00:04:15.295 real 0m3.102s 00:04:15.295 user 0m0.021s 00:04:15.295 sys 0m0.008s 00:04:15.295 10:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.295 10:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.295 ************************************ 00:04:15.295 END TEST scheduler_create_thread 00:04:15.295 ************************************ 00:04:15.295 10:17:48 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:15.295 10:17:48 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1329892 00:04:15.295 10:17:48 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1329892 ']' 00:04:15.295 10:17:48 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 1329892 00:04:15.295 10:17:48 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:15.295 10:17:48 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:15.295 10:17:48 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1329892 00:04:15.295 10:17:48 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:15.295 10:17:48 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:15.295 10:17:48 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1329892' 00:04:15.295 killing process with pid 1329892 00:04:15.295 10:17:48 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1329892 00:04:15.295 10:17:48 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1329892 00:04:15.295 [2024-12-12 10:17:49.265295] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
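The scheduler_create_thread test above drives the scheduler through an rpc.py plugin shipped with the test: it creates an active and an idle pinned thread on each of the four cores, adds unpinned one_third_active and half_active threads, adjusts one thread's activity at runtime (the set_active 11 50 call), and finally creates and deletes a thread to verify teardown. Assuming the plugin directory is on PYTHONPATH, the calls look roughly like:

    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12

Here -n is the thread name, -m the cpumask it is pinned to, and -a the target active percentage (100 for busy, 0 for idle), matching the values interleaved in the xtrace output above.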
00:04:15.555 00:04:15.555 real 0m4.164s 00:04:15.555 user 0m6.643s 00:04:15.555 sys 0m0.387s 00:04:15.555 10:17:49 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.555 10:17:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:15.555 ************************************ 00:04:15.555 END TEST event_scheduler 00:04:15.555 ************************************ 00:04:15.555 10:17:49 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:15.555 10:17:49 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:15.555 10:17:49 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.555 10:17:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.555 10:17:49 event -- common/autotest_common.sh@10 -- # set +x 00:04:15.555 ************************************ 00:04:15.555 START TEST app_repeat 00:04:15.555 ************************************ 00:04:15.555 10:17:49 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:15.555 10:17:49 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:15.555 10:17:49 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:15.555 10:17:49 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:15.555 10:17:49 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:15.555 10:17:49 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:15.555 10:17:49 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:15.555 10:17:49 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:15.555 10:17:49 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1330611 00:04:15.555 10:17:49 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:15.555 10:17:49 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:15.555 10:17:49 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1330611' 00:04:15.555 Process app_repeat pid: 1330611 00:04:15.555 10:17:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:15.555 10:17:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:15.555 spdk_app_start Round 0 00:04:15.555 10:17:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1330611 /var/tmp/spdk-nbd.sock 00:04:15.555 10:17:49 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1330611 ']' 00:04:15.555 10:17:49 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:15.555 10:17:49 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:15.555 10:17:49 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:15.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:15.555 10:17:49 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:15.555 10:17:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:15.555 [2024-12-12 10:17:49.568156] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
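
app_repeat, started above, is the harness for the rest of this block: it launches the test app on a private RPC socket with a 2-core mask and a repeat parameter of 4 (-t 4, matching repeat_times in the trace), then backs the two nbd devices with malloc bdevs. A minimal sketch of the setup visible in the trace (workspace paths shortened):

    # Values taken from the trace: private socket, cores 0-1, -t 4
    ./test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
    repeat_pid=$!
    trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
    # Two 64 MiB malloc bdevs (4096-byte blocks) become Malloc0 and Malloc1
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
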
00:04:15.555 [2024-12-12 10:17:49.568206] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1330611 ] 00:04:15.814 [2024-12-12 10:17:49.641837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:15.814 [2024-12-12 10:17:49.682728] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:15.814 [2024-12-12 10:17:49.682728] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.814 10:17:49 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:15.814 10:17:49 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:15.814 10:17:49 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:16.072 Malloc0 00:04:16.073 10:17:49 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:16.333 Malloc1 00:04:16.333 10:17:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:16.333 10:17:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:16.333 10:17:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:16.333 10:17:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:16.333 10:17:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:16.333 10:17:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:16.333 10:17:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:16.333 10:17:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:16.333 10:17:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:16.333 10:17:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:16.333 10:17:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:16.333 10:17:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:16.333 10:17:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:16.333 10:17:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:16.333 10:17:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:16.333 10:17:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:16.617 /dev/nbd0 00:04:16.617 10:17:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:16.617 10:17:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:16.617 10:17:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:16.617 10:17:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:16.617 10:17:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:16.617 10:17:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:16.617 10:17:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:04:16.617 10:17:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:16.617 10:17:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:16.617 10:17:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:16.617 10:17:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:16.617 1+0 records in 00:04:16.617 1+0 records out 00:04:16.617 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196181 s, 20.9 MB/s 00:04:16.617 10:17:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:16.617 10:17:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:16.617 10:17:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:16.617 10:17:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:16.617 10:17:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:16.617 10:17:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:16.617 10:17:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:16.617 10:17:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:16.888 /dev/nbd1 00:04:16.888 10:17:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:16.888 10:17:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:16.888 10:17:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:16.888 10:17:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:16.888 10:17:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:16.888 10:17:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:16.888 10:17:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:16.888 10:17:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:16.888 10:17:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:16.888 10:17:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:16.888 10:17:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:16.888 1+0 records in 00:04:16.888 1+0 records out 00:04:16.888 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240596 s, 17.0 MB/s 00:04:16.888 10:17:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:16.888 10:17:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:16.888 10:17:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:16.888 10:17:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:16.888 10:17:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:16.888 10:17:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:16.888 10:17:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:16.888 
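
waitfornbd, traced twice above (once per device), is the readiness probe used after nbd_start_disk: it polls /proc/partitions until the kernel lists the device, then proves the device answers a one-block O_DIRECT read. A hedged reconstruction of the helper as the trace shows it (the temp-file path is shortened, and the back-off between retries is assumed — only the loop bounds and the dd/stat/rm sequence are visible):

    waitfornbd() {
        local nbd_name=$1 i size
        # Wait (up to 20 attempts) for the kernel to list the device
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed; the trace only shows the loop bounds
        done
        # Retry a one-block direct read until the device answers
        for ((i = 1; i <= 20; i++)); do
            dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct && break
            sleep 0.1
        done
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]   # a non-empty read means the device is live
    }
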
10:17:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:16.888 10:17:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:16.888 10:17:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:17.146 10:17:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:17.146 { 00:04:17.146 "nbd_device": "/dev/nbd0", 00:04:17.146 "bdev_name": "Malloc0" 00:04:17.146 }, 00:04:17.146 { 00:04:17.146 "nbd_device": "/dev/nbd1", 00:04:17.146 "bdev_name": "Malloc1" 00:04:17.146 } 00:04:17.146 ]' 00:04:17.146 10:17:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:17.146 10:17:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:17.146 { 00:04:17.146 "nbd_device": "/dev/nbd0", 00:04:17.146 "bdev_name": "Malloc0" 00:04:17.146 }, 00:04:17.146 { 00:04:17.146 "nbd_device": "/dev/nbd1", 00:04:17.146 "bdev_name": "Malloc1" 00:04:17.146 } 00:04:17.146 ]' 00:04:17.146 10:17:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:17.146 /dev/nbd1' 00:04:17.146 10:17:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:17.146 /dev/nbd1' 00:04:17.146 10:17:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:17.146 10:17:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:17.146 10:17:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:17.146 10:17:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:17.146 10:17:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:17.146 10:17:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:17.146 10:17:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:17.146 10:17:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:17.146 10:17:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:17.147 10:17:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:17.147 10:17:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:17.147 10:17:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:17.147 256+0 records in 00:04:17.147 256+0 records out 00:04:17.147 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00713142 s, 147 MB/s 00:04:17.147 10:17:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:17.147 10:17:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:17.147 256+0 records in 00:04:17.147 256+0 records out 00:04:17.147 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143246 s, 73.2 MB/s 00:04:17.147 10:17:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:17.147 10:17:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:17.147 256+0 records in 00:04:17.147 256+0 records out 00:04:17.147 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.015018 s, 69.8 MB/s 00:04:17.147 10:17:51 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:17.147 10:17:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:17.147 10:17:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:17.147 10:17:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:17.147 10:17:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:17.147 10:17:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:17.147 10:17:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:17.147 10:17:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:17.147 10:17:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:17.147 10:17:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:17.147 10:17:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:17.147 10:17:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:17.147 10:17:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:17.147 10:17:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:17.147 10:17:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:17.147 10:17:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:17.147 10:17:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:17.147 10:17:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:17.147 10:17:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:17.405 10:17:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:17.405 10:17:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:17.405 10:17:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:17.405 10:17:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:17.405 10:17:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:17.405 10:17:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:17.405 10:17:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:17.405 10:17:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:17.405 10:17:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:17.405 10:17:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:17.664 10:17:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:17.664 10:17:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:17.664 10:17:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:17.664 10:17:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:17.664 10:17:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:04:17.664 10:17:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:17.664 10:17:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:17.664 10:17:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:17.664 10:17:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:17.664 10:17:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:17.664 10:17:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:17.923 10:17:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:17.923 10:17:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:17.923 10:17:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:17.923 10:17:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:17.923 10:17:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:17.923 10:17:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:17.923 10:17:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:17.923 10:17:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:17.923 10:17:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:17.924 10:17:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:17.924 10:17:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:17.924 10:17:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:17.924 10:17:51 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:18.182 10:17:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:18.182 [2024-12-12 10:17:52.105071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:18.182 [2024-12-12 10:17:52.141319] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.182 [2024-12-12 10:17:52.141319] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:18.182 [2024-12-12 10:17:52.181917] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:18.182 [2024-12-12 10:17:52.181958] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:21.467 10:17:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:21.467 10:17:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:21.467 spdk_app_start Round 1 00:04:21.467 10:17:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1330611 /var/tmp/spdk-nbd.sock 00:04:21.467 10:17:54 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1330611 ']' 00:04:21.467 10:17:54 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:21.467 10:17:54 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:21.467 10:17:54 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:21.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
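
The nbd_dd_data_verify calls in the round just completed implement a simple write-and-compare check: 1 MiB of random data is staged in a temp file, written through each nbd device with O_DIRECT, and each device's first 1 MiB is then compared byte-for-byte against the staged pattern. Condensed from the commands in the trace (paths shortened):

    pattern=/tmp/nbdrandtest
    dd if=/dev/urandom of="$pattern" bs=4096 count=256             # stage 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$pattern" of="$nbd" bs=4096 count=256 oflag=direct  # write phase
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$pattern" "$nbd"                             # verify phase: byte-exact
    done
    rm "$pattern"
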
00:04:21.467 10:17:54 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:21.467 10:17:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:21.467 10:17:55 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:21.467 10:17:55 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:21.467 10:17:55 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:21.467 Malloc0 00:04:21.467 10:17:55 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:21.726 Malloc1 00:04:21.726 10:17:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:21.726 10:17:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:21.726 10:17:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:21.726 10:17:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:21.726 10:17:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:21.726 10:17:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:21.726 10:17:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:21.726 10:17:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:21.726 10:17:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:21.726 10:17:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:21.726 10:17:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:21.726 10:17:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:21.726 10:17:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:21.726 10:17:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:21.726 10:17:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:21.726 10:17:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:21.984 /dev/nbd0 00:04:21.984 10:17:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:21.984 10:17:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:21.984 10:17:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:21.984 10:17:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:21.984 10:17:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:21.984 10:17:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:21.984 10:17:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:21.984 10:17:55 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:21.984 10:17:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:21.984 10:17:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:21.984 10:17:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:21.984 1+0 records in 00:04:21.984 1+0 records out 00:04:21.984 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197065 s, 20.8 MB/s 00:04:21.984 10:17:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:21.984 10:17:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:21.984 10:17:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:21.984 10:17:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:21.984 10:17:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:21.984 10:17:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:21.984 10:17:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:21.984 10:17:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:22.242 /dev/nbd1 00:04:22.242 10:17:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:22.242 10:17:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:22.242 10:17:56 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:22.242 10:17:56 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:22.242 10:17:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:22.242 10:17:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:22.242 10:17:56 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:22.242 10:17:56 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:22.242 10:17:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:22.242 10:17:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:22.242 10:17:56 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:22.242 1+0 records in 00:04:22.242 1+0 records out 00:04:22.242 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000154597 s, 26.5 MB/s 00:04:22.242 10:17:56 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:22.242 10:17:56 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:22.242 10:17:56 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:22.242 10:17:56 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:22.242 10:17:56 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:22.242 10:17:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:22.242 10:17:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:22.242 10:17:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:22.242 10:17:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:22.242 10:17:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:22.242 10:17:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:22.242 { 00:04:22.242 "nbd_device": "/dev/nbd0", 00:04:22.242 "bdev_name": "Malloc0" 00:04:22.242 }, 00:04:22.242 { 00:04:22.242 "nbd_device": "/dev/nbd1", 00:04:22.242 "bdev_name": "Malloc1" 00:04:22.242 } 00:04:22.242 ]' 00:04:22.242 10:17:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:22.242 { 00:04:22.242 "nbd_device": "/dev/nbd0", 00:04:22.242 "bdev_name": "Malloc0" 00:04:22.242 }, 00:04:22.242 { 00:04:22.242 "nbd_device": "/dev/nbd1", 00:04:22.242 "bdev_name": "Malloc1" 00:04:22.242 } 00:04:22.242 ]' 00:04:22.242 10:17:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:22.501 10:17:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:22.501 /dev/nbd1' 00:04:22.501 10:17:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:22.501 10:17:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:22.501 /dev/nbd1' 00:04:22.501 10:17:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:22.501 10:17:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:22.501 10:17:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:22.501 10:17:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:22.501 10:17:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:22.501 10:17:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:22.501 10:17:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:22.501 10:17:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:22.501 10:17:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:22.501 10:17:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:22.501 10:17:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:22.501 256+0 records in 00:04:22.501 256+0 records out 00:04:22.501 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010681 s, 98.2 MB/s 00:04:22.501 10:17:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:22.501 10:17:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:22.501 256+0 records in 00:04:22.501 256+0 records out 00:04:22.501 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140693 s, 74.5 MB/s 00:04:22.501 10:17:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:22.501 10:17:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:22.501 256+0 records in 00:04:22.501 256+0 records out 00:04:22.501 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149762 s, 70.0 MB/s 00:04:22.501 10:17:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:22.501 10:17:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:22.501 10:17:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:22.501 10:17:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:22.501 10:17:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:22.501 10:17:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:22.501 10:17:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:22.501 10:17:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:22.501 10:17:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:22.501 10:17:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:22.501 10:17:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:22.501 10:17:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:22.501 10:17:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:22.501 10:17:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:22.501 10:17:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:22.501 10:17:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:22.501 10:17:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:22.501 10:17:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:22.501 10:17:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:22.759 10:17:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:22.759 10:17:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:22.759 10:17:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:22.759 10:17:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:22.759 10:17:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:22.759 10:17:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:22.759 10:17:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:22.759 10:17:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:22.759 10:17:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:22.759 10:17:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:22.759 10:17:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:23.018 10:17:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:23.018 10:17:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:23.018 10:17:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:23.018 10:17:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:23.018 10:17:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:23.018 10:17:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:23.018 10:17:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:23.018 10:17:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:23.018 10:17:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.018 10:17:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:23.018 10:17:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:23.018 10:17:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:23.018 10:17:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:23.276 10:17:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:23.276 10:17:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:23.276 10:17:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:23.276 10:17:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:23.276 10:17:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:23.276 10:17:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:23.276 10:17:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:23.276 10:17:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:23.276 10:17:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:23.276 10:17:57 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:23.276 10:17:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:23.534 [2024-12-12 10:17:57.406629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:23.534 [2024-12-12 10:17:57.442619] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:23.534 [2024-12-12 10:17:57.442621] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.534 [2024-12-12 10:17:57.483747] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:23.534 [2024-12-12 10:17:57.483786] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:26.826 10:18:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:26.826 10:18:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:26.826 spdk_app_start Round 2 00:04:26.826 10:18:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1330611 /var/tmp/spdk-nbd.sock 00:04:26.826 10:18:00 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1330611 ']' 00:04:26.826 10:18:00 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:26.826 10:18:00 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:26.826 10:18:00 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:26.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
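
Each "spdk_app_start Round N" block above repeats the same cycle: wait for the app's RPC socket, re-create the bdevs, run the nbd verification, then ask the app to restart itself via spdk_kill_instance SIGTERM and sleep 3 seconds before the next round. A minimal sketch of the driver loop implied by event.sh in the trace (waitforlisten is the common helper shown there; the verify body is elided):

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
        # ... bdev_malloc_create x2, nbd start/verify/stop (see sketches above) ...
        scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
        sleep 3
    done
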
00:04:26.826 10:18:00 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:26.826 10:18:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:26.826 10:18:00 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:26.826 10:18:00 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:26.826 10:18:00 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:26.826 Malloc0 00:04:26.826 10:18:00 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:27.085 Malloc1 00:04:27.085 10:18:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:27.085 10:18:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:27.085 10:18:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:27.085 10:18:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:27.085 10:18:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:27.085 10:18:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:27.085 10:18:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:27.085 10:18:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:27.085 10:18:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:27.085 10:18:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:27.085 10:18:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:27.085 10:18:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:27.085 10:18:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:27.085 10:18:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:27.085 10:18:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:27.085 10:18:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:27.344 /dev/nbd0 00:04:27.344 10:18:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:27.344 10:18:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:27.344 10:18:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:27.344 10:18:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:27.344 10:18:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:27.344 10:18:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:27.344 10:18:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:27.344 10:18:01 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:27.344 10:18:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:27.344 10:18:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:27.344 10:18:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:27.344 1+0 records in 00:04:27.344 1+0 records out 00:04:27.344 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225935 s, 18.1 MB/s 00:04:27.344 10:18:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:27.344 10:18:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:27.344 10:18:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:27.344 10:18:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:27.344 10:18:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:27.344 10:18:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:27.344 10:18:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:27.344 10:18:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:27.602 /dev/nbd1 00:04:27.602 10:18:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:27.602 10:18:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:27.602 10:18:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:27.602 10:18:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:27.602 10:18:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:27.602 10:18:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:27.602 10:18:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:27.602 10:18:01 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:27.602 10:18:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:27.602 10:18:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:27.602 10:18:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:27.602 1+0 records in 00:04:27.602 1+0 records out 00:04:27.602 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245778 s, 16.7 MB/s 00:04:27.602 10:18:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:27.602 10:18:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:27.602 10:18:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:27.602 10:18:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:27.602 10:18:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:27.602 10:18:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:27.602 10:18:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:27.602 10:18:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:27.602 10:18:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:27.602 10:18:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:27.861 10:18:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:27.861 { 00:04:27.861 "nbd_device": "/dev/nbd0", 00:04:27.861 "bdev_name": "Malloc0" 00:04:27.861 }, 00:04:27.861 { 00:04:27.861 "nbd_device": "/dev/nbd1", 00:04:27.861 "bdev_name": "Malloc1" 00:04:27.861 } 00:04:27.861 ]' 00:04:27.861 10:18:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:27.861 { 00:04:27.861 "nbd_device": "/dev/nbd0", 00:04:27.861 "bdev_name": "Malloc0" 00:04:27.861 }, 00:04:27.861 { 00:04:27.861 "nbd_device": "/dev/nbd1", 00:04:27.861 "bdev_name": "Malloc1" 00:04:27.861 } 00:04:27.861 ]' 00:04:27.861 10:18:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:27.861 10:18:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:27.861 /dev/nbd1' 00:04:27.861 10:18:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:27.861 /dev/nbd1' 00:04:27.861 10:18:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:27.861 10:18:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:27.861 10:18:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:27.861 10:18:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:27.861 10:18:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:27.861 10:18:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:27.861 10:18:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:27.861 10:18:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:27.861 10:18:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:27.861 10:18:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:27.861 10:18:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:27.861 10:18:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:27.861 256+0 records in 00:04:27.861 256+0 records out 00:04:27.861 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101942 s, 103 MB/s 00:04:27.861 10:18:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:27.861 10:18:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:27.861 256+0 records in 00:04:27.861 256+0 records out 00:04:27.861 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140608 s, 74.6 MB/s 00:04:27.861 10:18:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:27.861 10:18:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:27.861 256+0 records in 00:04:27.861 256+0 records out 00:04:27.861 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0153752 s, 68.2 MB/s 00:04:27.861 10:18:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:27.861 10:18:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:27.861 10:18:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:27.861 10:18:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:27.861 10:18:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:27.861 10:18:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:27.861 10:18:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:27.861 10:18:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:27.861 10:18:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:27.861 10:18:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:27.861 10:18:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:27.861 10:18:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:27.861 10:18:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:27.861 10:18:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:27.861 10:18:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:27.861 10:18:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:27.861 10:18:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:27.861 10:18:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:27.861 10:18:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:28.120 10:18:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:28.120 10:18:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:28.120 10:18:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:28.120 10:18:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:28.120 10:18:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:28.120 10:18:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:28.120 10:18:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:28.120 10:18:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:28.120 10:18:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:28.120 10:18:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:28.379 10:18:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:28.379 10:18:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:28.379 10:18:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:28.379 10:18:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:28.379 10:18:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:28.379 10:18:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:28.379 10:18:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:28.379 10:18:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:28.379 10:18:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:28.379 10:18:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:28.379 10:18:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:28.638 10:18:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:28.638 10:18:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:28.638 10:18:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:28.638 10:18:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:28.638 10:18:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:28.638 10:18:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:28.638 10:18:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:28.638 10:18:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:28.638 10:18:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:28.638 10:18:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:28.638 10:18:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:28.638 10:18:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:28.638 10:18:02 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:28.897 10:18:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:28.897 [2024-12-12 10:18:02.811923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:28.897 [2024-12-12 10:18:02.850098] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:28.897 [2024-12-12 10:18:02.850099] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.897 [2024-12-12 10:18:02.890898] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:28.897 [2024-12-12 10:18:02.890938] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:32.184 10:18:05 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1330611 /var/tmp/spdk-nbd.sock 00:04:32.184 10:18:05 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1330611 ']' 00:04:32.184 10:18:05 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:32.184 10:18:05 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:32.184 10:18:05 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:32.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:32.184 10:18:05 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:32.184 10:18:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:32.184 10:18:05 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:32.184 10:18:05 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:32.184 10:18:05 event.app_repeat -- event/event.sh@39 -- # killprocess 1330611 00:04:32.184 10:18:05 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1330611 ']' 00:04:32.184 10:18:05 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1330611 00:04:32.184 10:18:05 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:04:32.184 10:18:05 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:32.184 10:18:05 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1330611 00:04:32.184 10:18:05 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:32.184 10:18:05 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:32.184 10:18:05 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1330611' 00:04:32.184 killing process with pid 1330611 00:04:32.184 10:18:05 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1330611 00:04:32.184 10:18:05 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1330611 00:04:32.184 spdk_app_start is called in Round 0. 00:04:32.184 Shutdown signal received, stop current app iteration 00:04:32.184 Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 reinitialization... 00:04:32.184 spdk_app_start is called in Round 1. 00:04:32.184 Shutdown signal received, stop current app iteration 00:04:32.184 Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 reinitialization... 00:04:32.184 spdk_app_start is called in Round 2. 00:04:32.184 Shutdown signal received, stop current app iteration 00:04:32.184 Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 reinitialization... 00:04:32.184 spdk_app_start is called in Round 3. 
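
killprocess, traced above for pid 1330611, guards the final teardown with two checks before signalling: the process must still exist, and on Linux its comm name must not be a sudo wrapper (here it is reactor_0, so the kill proceeds). A hedged reconstruction from the trace:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1           # the '[' -z ... ']' guard in the trace
        kill -0 "$pid" || return 0          # assumed: process already gone, nothing to do
        if [ "$(uname)" = Linux ]; then
            # Never signal a sudo wrapper by mistake
            [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                         # reap and propagate the exit status
    }
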
00:04:32.184 Shutdown signal received, stop current app iteration 00:04:32.184 10:18:06 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:32.184 10:18:06 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:32.184 00:04:32.184 real 0m16.527s 00:04:32.184 user 0m36.406s 00:04:32.184 sys 0m2.562s 00:04:32.184 10:18:06 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.184 10:18:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:32.184 ************************************ 00:04:32.184 END TEST app_repeat 00:04:32.184 ************************************ 00:04:32.184 10:18:06 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:32.185 10:18:06 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:32.185 10:18:06 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.185 10:18:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.185 10:18:06 event -- common/autotest_common.sh@10 -- # set +x 00:04:32.185 ************************************ 00:04:32.185 START TEST cpu_locks 00:04:32.185 ************************************ 00:04:32.185 10:18:06 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:32.444 * Looking for test storage... 00:04:32.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:32.444 10:18:06 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:32.444 10:18:06 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:04:32.444 10:18:06 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:32.444 10:18:06 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:32.444 10:18:06 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.444 10:18:06 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.444 10:18:06 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.444 10:18:06 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.444 10:18:06 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.444 10:18:06 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.444 10:18:06 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:32.444 10:18:06 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:32.444 10:18:06 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:32.444 10:18:06 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.444 10:18:06 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:32.444 10:18:06 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:32.444 10:18:06 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:32.444 10:18:06 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.444 10:18:06 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:32.444 10:18:06 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:32.444 10:18:06 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:32.444 10:18:06 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.444 10:18:06 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:32.444 10:18:06 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.444 10:18:06 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:32.444 10:18:06 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:32.444 10:18:06 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.444 10:18:06 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:32.444 10:18:06 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.444 10:18:06 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.444 10:18:06 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.444 10:18:06 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:32.444 10:18:06 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.444 10:18:06 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:32.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.444 --rc genhtml_branch_coverage=1 00:04:32.444 --rc genhtml_function_coverage=1 00:04:32.444 --rc genhtml_legend=1 00:04:32.444 --rc geninfo_all_blocks=1 00:04:32.444 --rc geninfo_unexecuted_blocks=1 00:04:32.444 00:04:32.444 ' 00:04:32.444 10:18:06 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:32.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.444 --rc genhtml_branch_coverage=1 00:04:32.444 --rc genhtml_function_coverage=1 00:04:32.444 --rc genhtml_legend=1 00:04:32.444 --rc geninfo_all_blocks=1 00:04:32.444 --rc geninfo_unexecuted_blocks=1 00:04:32.444 00:04:32.444 ' 00:04:32.444 10:18:06 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:32.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.444 --rc genhtml_branch_coverage=1 00:04:32.444 --rc genhtml_function_coverage=1 00:04:32.444 --rc genhtml_legend=1 00:04:32.444 --rc geninfo_all_blocks=1 00:04:32.444 --rc geninfo_unexecuted_blocks=1 00:04:32.444 00:04:32.444 ' 00:04:32.444 10:18:06 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:32.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.444 --rc genhtml_branch_coverage=1 00:04:32.444 --rc genhtml_function_coverage=1 00:04:32.444 --rc genhtml_legend=1 00:04:32.444 --rc geninfo_all_blocks=1 00:04:32.444 --rc geninfo_unexecuted_blocks=1 00:04:32.444 00:04:32.444 ' 00:04:32.444 10:18:06 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:32.444 10:18:06 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:32.444 10:18:06 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:32.444 10:18:06 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:32.444 10:18:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.444 10:18:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.444 10:18:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:32.444 ************************************ 
00:04:32.444 START TEST default_locks 00:04:32.444 ************************************ 00:04:32.444 10:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:04:32.444 10:18:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1333791 00:04:32.444 10:18:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1333791 00:04:32.445 10:18:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:32.445 10:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1333791 ']' 00:04:32.445 10:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.445 10:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:32.445 10:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.445 10:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:32.445 10:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:32.445 [2024-12-12 10:18:06.397540] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:04:32.445 [2024-12-12 10:18:06.397589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1333791 ] 00:04:32.703 [2024-12-12 10:18:06.473504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.703 [2024-12-12 10:18:06.515388] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.962 10:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:32.962 10:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:04:32.962 10:18:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1333791 00:04:32.962 10:18:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1333791 00:04:32.962 10:18:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:32.962 lslocks: write error 00:04:32.962 10:18:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1333791 00:04:32.962 10:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1333791 ']' 00:04:32.962 10:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1333791 00:04:32.962 10:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:04:32.962 10:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:32.962 10:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1333791 00:04:32.962 10:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:32.962 10:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:32.962 10:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 1333791' 00:04:32.962 killing process with pid 1333791 00:04:32.962 10:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1333791 00:04:32.962 10:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1333791 00:04:33.221 10:18:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1333791 00:04:33.221 10:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:04:33.221 10:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1333791 00:04:33.221 10:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:33.221 10:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:33.221 10:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:33.221 10:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:33.221 10:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1333791 00:04:33.221 10:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1333791 ']' 00:04:33.221 10:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.221 10:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:33.221 10:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:33.221 10:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:33.221 10:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:33.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1333791) - No such process 00:04:33.221 ERROR: process (pid: 1333791) is no longer running 00:04:33.221 10:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:33.221 10:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:04:33.221 10:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:04:33.221 10:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:33.221 10:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:33.221 10:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:33.480 10:18:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:33.480 10:18:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:33.480 10:18:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:33.480 10:18:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:33.480 00:04:33.480 real 0m0.901s 00:04:33.480 user 0m0.831s 00:04:33.480 sys 0m0.427s 00:04:33.480 10:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.480 10:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:33.480 ************************************ 00:04:33.480 END TEST default_locks 00:04:33.480 ************************************ 00:04:33.480 10:18:07 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:33.480 10:18:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.480 10:18:07 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.480 10:18:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:33.480 ************************************ 00:04:33.480 START TEST default_locks_via_rpc 00:04:33.480 ************************************ 00:04:33.480 10:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:04:33.480 10:18:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1333917 00:04:33.480 10:18:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1333917 00:04:33.480 10:18:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:33.480 10:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1333917 ']' 00:04:33.480 10:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.480 10:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:33.480 10:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
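The default_locks teardown above walks through the killprocess helper from autotest_common.sh: a kill -0 liveness probe, a comm-name lookup (reactor_0 for an SPDK target), then kill and wait. A condensed sketch reconstructed from the trace; the sudo special-case tested at @964 is omitted here:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1            # mirrors the '[' -z ... ']' guard
        kill -0 "$pid" || return 1           # liveness probe only; sends no signal
        ps --no-headers -o comm= "$pid"      # reports reactor_0 for an SPDK target
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                          # reaping works only for children of this shell
    }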
00:04:33.481 10:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:33.481 10:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.481 [2024-12-12 10:18:07.364559] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:04:33.481 [2024-12-12 10:18:07.364621] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1333917 ] 00:04:33.481 [2024-12-12 10:18:07.437225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.481 [2024-12-12 10:18:07.479409] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.740 10:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:33.740 10:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:33.740 10:18:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:33.740 10:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.740 10:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.740 10:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.740 10:18:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:33.740 10:18:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:33.740 10:18:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:33.740 10:18:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:33.740 10:18:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:33.740 10:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.740 10:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.740 10:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.740 10:18:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1333917 00:04:33.740 10:18:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:33.740 10:18:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1333917 00:04:34.307 10:18:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1333917 00:04:34.307 10:18:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1333917 ']' 00:04:34.307 10:18:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1333917 00:04:34.307 10:18:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:04:34.307 10:18:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:34.307 10:18:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1333917 00:04:34.307 10:18:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:34.307 
10:18:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:34.307 10:18:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1333917' 00:04:34.307 killing process with pid 1333917 00:04:34.307 10:18:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1333917 00:04:34.307 10:18:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1333917 00:04:34.566 00:04:34.566 real 0m1.176s 00:04:34.566 user 0m1.137s 00:04:34.566 sys 0m0.525s 00:04:34.566 10:18:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.566 10:18:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.566 ************************************ 00:04:34.566 END TEST default_locks_via_rpc 00:04:34.566 ************************************ 00:04:34.566 10:18:08 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:34.566 10:18:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.566 10:18:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.566 10:18:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:34.566 ************************************ 00:04:34.566 START TEST non_locking_app_on_locked_coremask 00:04:34.566 ************************************ 00:04:34.566 10:18:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:04:34.566 10:18:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1334199 00:04:34.566 10:18:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1334199 /var/tmp/spdk.sock 00:04:34.566 10:18:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:34.566 10:18:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1334199 ']' 00:04:34.566 10:18:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.566 10:18:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:34.566 10:18:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.566 10:18:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:34.566 10:18:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:34.825 [2024-12-12 10:18:08.607075] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
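The default_locks_via_rpc run that just ended drives the lock state over the RPC socket instead of at startup; both method names come straight from the rpc_cmd calls in the trace (paths as in this run):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC -s /var/tmp/spdk.sock framework_disable_cpumask_locks   # release the per-core lock files
    $RPC -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # re-claim them at runtime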
00:04:34.825 [2024-12-12 10:18:08.607121] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1334199 ] 00:04:34.825 [2024-12-12 10:18:08.680654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.825 [2024-12-12 10:18:08.722232] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.085 10:18:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:35.085 10:18:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:35.085 10:18:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1334393 00:04:35.085 10:18:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:35.085 10:18:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1334393 /var/tmp/spdk2.sock 00:04:35.085 10:18:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1334393 ']' 00:04:35.085 10:18:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:35.085 10:18:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:35.085 10:18:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:35.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:35.085 10:18:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:35.085 10:18:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:35.085 [2024-12-12 10:18:08.989298] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:04:35.085 [2024-12-12 10:18:08.989347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1334393 ] 00:04:35.085 [2024-12-12 10:18:09.079736] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
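waitforlisten's loop body runs with xtrace disabled in this log, so only its entry and exit are visible. A rough reconstruction of the polling it implies, assuming rpc_get_methods as the probe RPC (the cap of 100 matches max_retries above):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i=0
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while (( i++ < max_retries )); do
            kill -0 "$pid" || return 1       # target died during startup
            [ -S "$rpc_addr" ] && $RPC -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
            sleep 0.5
        done
        return 1
    }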
00:04:35.085 [2024-12-12 10:18:09.079764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.344 [2024-12-12 10:18:09.168912] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.911 10:18:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:35.911 10:18:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:35.911 10:18:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1334199 00:04:35.911 10:18:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1334199 00:04:35.911 10:18:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:36.479 lslocks: write error 00:04:36.479 10:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1334199 00:04:36.479 10:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1334199 ']' 00:04:36.479 10:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1334199 00:04:36.479 10:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:36.479 10:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:36.479 10:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1334199 00:04:36.479 10:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:36.479 10:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:36.479 10:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1334199' 00:04:36.479 killing process with pid 1334199 00:04:36.479 10:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1334199 00:04:36.479 10:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1334199 00:04:37.046 10:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1334393 00:04:37.046 10:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1334393 ']' 00:04:37.046 10:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1334393 00:04:37.046 10:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:37.046 10:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:37.046 10:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1334393 00:04:37.046 10:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:37.046 10:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:37.046 10:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1334393' 00:04:37.046 
killing process with pid 1334393 00:04:37.046 10:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1334393 00:04:37.046 10:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1334393 00:04:37.305 00:04:37.305 real 0m2.682s 00:04:37.305 user 0m2.831s 00:04:37.305 sys 0m0.870s 00:04:37.305 10:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.305 10:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:37.305 ************************************ 00:04:37.305 END TEST non_locking_app_on_locked_coremask 00:04:37.305 ************************************ 00:04:37.305 10:18:11 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:37.305 10:18:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.305 10:18:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.305 10:18:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:37.305 ************************************ 00:04:37.305 START TEST locking_app_on_unlocked_coremask 00:04:37.305 ************************************ 00:04:37.305 10:18:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:04:37.305 10:18:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1335058 00:04:37.305 10:18:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1335058 /var/tmp/spdk.sock 00:04:37.305 10:18:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:37.305 10:18:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1335058 ']' 00:04:37.305 10:18:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.305 10:18:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.305 10:18:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.305 10:18:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.305 10:18:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:37.565 [2024-12-12 10:18:11.360165] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:04:37.565 [2024-12-12 10:18:11.360206] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1335058 ] 00:04:37.565 [2024-12-12 10:18:11.432274] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
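The locks_exist checks above pipe lslocks into grep -q, which likely explains the stray "lslocks: write error" lines: grep -q exits on its first match and closes the pipe, so lslocks can take an EPIPE while still writing. The helper's status is grep's, so the error is cosmetic:

    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
        # pipeline status is grep's: 0 when the pid holds an spdk_cpu_lock,
        # regardless of the write error lslocks may print on the way out
    }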
00:04:37.565 [2024-12-12 10:18:11.432304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.565 [2024-12-12 10:18:11.469852] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.824 10:18:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:37.824 10:18:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:37.824 10:18:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1335168 00:04:37.824 10:18:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1335168 /var/tmp/spdk2.sock 00:04:37.824 10:18:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:37.824 10:18:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1335168 ']' 00:04:37.824 10:18:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:37.824 10:18:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.824 10:18:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:37.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:37.824 10:18:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.824 10:18:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:37.824 [2024-12-12 10:18:11.746552] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
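locking_app_on_unlocked_coremask now has two spdk_tgt processes on core 0 at once: the first started with --disable-cpumask-locks, the second left free to claim the lock. Side by side, the two launch lines reduce to the following (paths as in this run; each instance also gets its own --file-prefix=spdk_pid<pid> in the EAL parameters above, keeping their hugepage files apart):

    SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    $SPDK_TGT -m 0x1 --disable-cpumask-locks &    # primary, answers on /var/tmp/spdk.sock
    $SPDK_TGT -m 0x1 -r /var/tmp/spdk2.sock &     # secondary on the same core, locks enabled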
00:04:37.824 [2024-12-12 10:18:11.746608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1335168 ] 00:04:37.824 [2024-12-12 10:18:11.837712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.083 [2024-12-12 10:18:11.921361] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.650 10:18:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:38.650 10:18:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:38.650 10:18:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1335168 00:04:38.650 10:18:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1335168 00:04:38.650 10:18:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:39.218 lslocks: write error 00:04:39.218 10:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1335058 00:04:39.218 10:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1335058 ']' 00:04:39.218 10:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1335058 00:04:39.218 10:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:39.218 10:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:39.218 10:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1335058 00:04:39.218 10:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:39.218 10:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:39.218 10:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1335058' 00:04:39.218 killing process with pid 1335058 00:04:39.218 10:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1335058 00:04:39.218 10:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1335058 00:04:39.786 10:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1335168 00:04:39.786 10:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1335168 ']' 00:04:39.786 10:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1335168 00:04:39.786 10:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:39.786 10:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:39.786 10:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1335168 00:04:39.786 10:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:39.786 10:18:13 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:39.786 10:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1335168' 00:04:39.786 killing process with pid 1335168 00:04:39.786 10:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1335168 00:04:39.786 10:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1335168 00:04:40.046 00:04:40.046 real 0m2.745s 00:04:40.046 user 0m2.889s 00:04:40.046 sys 0m0.915s 00:04:40.046 10:18:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.046 10:18:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:40.046 ************************************ 00:04:40.046 END TEST locking_app_on_unlocked_coremask 00:04:40.046 ************************************ 00:04:40.305 10:18:14 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:40.305 10:18:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.305 10:18:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.305 10:18:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:40.305 ************************************ 00:04:40.305 START TEST locking_app_on_locked_coremask 00:04:40.305 ************************************ 00:04:40.305 10:18:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:04:40.305 10:18:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1335546 00:04:40.305 10:18:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1335546 /var/tmp/spdk.sock 00:04:40.305 10:18:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:40.305 10:18:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1335546 ']' 00:04:40.305 10:18:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.305 10:18:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.305 10:18:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.305 10:18:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.305 10:18:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:40.305 [2024-12-12 10:18:14.175050] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
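Every START TEST/END TEST pair in this log, including the locking_app_on_locked_coremask block opening here, comes from the run_test wrapper in autotest_common.sh. A stripped-down approximation; the real helper also toggles xtrace and records failures, and the '[' 2 -le 1 ']' line above is its argument-count guard:

    run_test() {
        [ $# -le 1 ] && return 1      # needs a name plus a command
        local name=$1 stars; shift
        stars=$(printf '*%.0s' {1..36})
        printf '%s\nSTART TEST %s\n%s\n' "$stars" "$name" "$stars"
        time "$@"                     # produces the real/user/sys lines in the log
        local rc=$?
        printf '%s\nEND TEST %s\n%s\n' "$stars" "$name" "$stars"
        return $rc
    }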
00:04:40.305 [2024-12-12 10:18:14.175090] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1335546 ] 00:04:40.305 [2024-12-12 10:18:14.247927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.305 [2024-12-12 10:18:14.287477] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.565 10:18:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:40.565 10:18:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:40.565 10:18:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1335728 00:04:40.565 10:18:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1335728 /var/tmp/spdk2.sock 00:04:40.565 10:18:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:40.565 10:18:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:40.565 10:18:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1335728 /var/tmp/spdk2.sock 00:04:40.565 10:18:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:40.565 10:18:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.565 10:18:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:40.565 10:18:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.565 10:18:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1335728 /var/tmp/spdk2.sock 00:04:40.565 10:18:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1335728 ']' 00:04:40.565 10:18:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:40.565 10:18:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.565 10:18:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:40.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:40.565 10:18:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.565 10:18:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:40.565 [2024-12-12 10:18:14.568174] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
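The NOT wrapper traced above inverts an expected failure: waitforlisten on the still-locked second instance must not succeed. Its core logic, minus the signal allow-list handled by the (( es > 128 )) branch:

    NOT() {
        local es=0
        "$@" || es=$?    # run the wrapped command, keep its failure status
        (( es != 0 ))    # NOT succeeds exactly when the command failed
    }

    NOT waitforlisten 1335728 /var/tmp/spdk2.sock   # passes, since the listen never happens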
00:04:40.565 [2024-12-12 10:18:14.568220] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1335728 ] 00:04:40.823 [2024-12-12 10:18:14.652382] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1335546 has claimed it. 00:04:40.823 [2024-12-12 10:18:14.652416] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:41.390 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1335728) - No such process 00:04:41.390 ERROR: process (pid: 1335728) is no longer running 00:04:41.390 10:18:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:41.390 10:18:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:41.390 10:18:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:41.390 10:18:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:41.390 10:18:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:41.390 10:18:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:41.390 10:18:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1335546 00:04:41.390 10:18:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1335546 00:04:41.390 10:18:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:41.648 lslocks: write error 00:04:41.648 10:18:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1335546 00:04:41.648 10:18:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1335546 ']' 00:04:41.648 10:18:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1335546 00:04:41.648 10:18:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:41.907 10:18:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:41.907 10:18:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1335546 00:04:41.907 10:18:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:41.907 10:18:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:41.907 10:18:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1335546' 00:04:41.907 killing process with pid 1335546 00:04:41.907 10:18:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1335546 00:04:41.907 10:18:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1335546 00:04:42.166 00:04:42.166 real 0m1.898s 00:04:42.166 user 0m2.036s 00:04:42.166 sys 0m0.637s 00:04:42.166 10:18:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:04:42.166 10:18:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:42.166 ************************************ 00:04:42.166 END TEST locking_app_on_locked_coremask 00:04:42.166 ************************************ 00:04:42.166 10:18:16 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:42.166 10:18:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.166 10:18:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.166 10:18:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:42.166 ************************************ 00:04:42.166 START TEST locking_overlapped_coremask 00:04:42.166 ************************************ 00:04:42.166 10:18:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:04:42.166 10:18:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1336020 00:04:42.166 10:18:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1336020 /var/tmp/spdk.sock 00:04:42.166 10:18:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:42.166 10:18:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1336020 ']' 00:04:42.166 10:18:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.166 10:18:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:42.166 10:18:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.166 10:18:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:42.166 10:18:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:42.166 [2024-12-12 10:18:16.141543] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
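The "Cannot create lock on core 0, probably process 1335546 has claimed it" failure above is per-core file locking doing its job. As an illustration only, flock(1) on a scratch file reproduces the shape of the contest (SPDK's actual locking lives in app.c and is not being invoked here):

    touch /tmp/demo_cpu_lock_000                      # stand-in path, not SPDK's real lock file
    flock -n /tmp/demo_cpu_lock_000 -c 'sleep 2' &    # first claimant holds "core 0"
    sleep 0.2
    flock -n /tmp/demo_cpu_lock_000 -c true \
        || echo "core 0 already claimed"              # second, non-blocking claim bounces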
00:04:42.166 [2024-12-12 10:18:16.141592] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1336020 ] 00:04:42.425 [2024-12-12 10:18:16.213677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:42.425 [2024-12-12 10:18:16.257410] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.425 [2024-12-12 10:18:16.257521] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.425 [2024-12-12 10:18:16.257521] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:04:42.685 10:18:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:42.685 10:18:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:42.685 10:18:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1336025 00:04:42.685 10:18:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1336025 /var/tmp/spdk2.sock 00:04:42.685 10:18:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:42.685 10:18:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:42.685 10:18:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1336025 /var/tmp/spdk2.sock 00:04:42.685 10:18:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:42.685 10:18:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:42.685 10:18:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:42.685 10:18:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:42.685 10:18:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1336025 /var/tmp/spdk2.sock 00:04:42.685 10:18:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1336025 ']' 00:04:42.685 10:18:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:42.685 10:18:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:42.685 10:18:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:42.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:42.685 10:18:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:42.685 10:18:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:42.685 [2024-12-12 10:18:16.520416] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
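locking_overlapped_coremask pits mask 0x7 (cores 0-2) against 0x1c (cores 2-4). One line of shell arithmetic shows the single contested core, matching the "Cannot create lock on core 2" error that follows:

    printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2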
00:04:42.685 [2024-12-12 10:18:16.520455] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1336025 ] 00:04:42.685 [2024-12-12 10:18:16.610182] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1336020 has claimed it. 00:04:42.685 [2024-12-12 10:18:16.610219] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:43.253 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1336025) - No such process 00:04:43.253 ERROR: process (pid: 1336025) is no longer running 00:04:43.253 10:18:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.253 10:18:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:43.253 10:18:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:43.253 10:18:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:43.253 10:18:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:43.253 10:18:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:43.253 10:18:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:43.253 10:18:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:43.253 10:18:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:43.253 10:18:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:43.253 10:18:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1336020 00:04:43.253 10:18:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1336020 ']' 00:04:43.253 10:18:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1336020 00:04:43.253 10:18:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:04:43.253 10:18:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:43.253 10:18:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1336020 00:04:43.253 10:18:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:43.253 10:18:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:43.253 10:18:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1336020' 00:04:43.253 killing process with pid 1336020 00:04:43.253 10:18:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1336020 00:04:43.253 10:18:17 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1336020 00:04:43.512 00:04:43.512 real 0m1.433s 00:04:43.512 user 0m3.961s 00:04:43.512 sys 0m0.392s 00:04:43.512 10:18:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.512 10:18:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:43.512 ************************************ 00:04:43.512 END TEST locking_overlapped_coremask 00:04:43.512 ************************************ 00:04:43.772 10:18:17 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:43.772 10:18:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.772 10:18:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.772 10:18:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:43.772 ************************************ 00:04:43.772 START TEST locking_overlapped_coremask_via_rpc 00:04:43.772 ************************************ 00:04:43.772 10:18:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:04:43.772 10:18:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1336275 00:04:43.772 10:18:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1336275 /var/tmp/spdk.sock 00:04:43.772 10:18:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:43.772 10:18:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1336275 ']' 00:04:43.772 10:18:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.772 10:18:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:43.772 10:18:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.772 10:18:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:43.772 10:18:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.772 [2024-12-12 10:18:17.645048] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:04:43.772 [2024-12-12 10:18:17.645093] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1336275 ] 00:04:43.772 [2024-12-12 10:18:17.718328] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:43.772 [2024-12-12 10:18:17.718354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:43.772 [2024-12-12 10:18:17.761250] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.772 [2024-12-12 10:18:17.761362] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.772 [2024-12-12 10:18:17.761362] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:04:44.709 10:18:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:44.709 10:18:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:44.709 10:18:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1336410 00:04:44.709 10:18:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1336410 /var/tmp/spdk2.sock 00:04:44.709 10:18:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:44.709 10:18:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1336410 ']' 00:04:44.709 10:18:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:44.709 10:18:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.709 10:18:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:44.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:44.709 10:18:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.709 10:18:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.709 [2024-12-12 10:18:18.535555] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:04:44.709 [2024-12-12 10:18:18.535614] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1336410 ] 00:04:44.709 [2024-12-12 10:18:18.627672] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:44.709 [2024-12-12 10:18:18.627706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:44.709 [2024-12-12 10:18:18.710583] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:04:44.709 [2024-12-12 10:18:18.713614] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:04:44.709 [2024-12-12 10:18:18.713615] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:04:45.646 10:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.646 10:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:45.646 10:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:45.646 10:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.646 10:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.646 10:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.646 10:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:45.646 10:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:45.646 10:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:45.646 10:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:45.646 10:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:45.646 10:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:45.646 10:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:45.646 10:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:45.646 10:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.646 10:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.646 [2024-12-12 10:18:19.402636] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1336275 has claimed it. 
00:04:45.646 request: 00:04:45.646 { 00:04:45.646 "method": "framework_enable_cpumask_locks", 00:04:45.646 "req_id": 1 00:04:45.646 } 00:04:45.646 Got JSON-RPC error response 00:04:45.646 response: 00:04:45.646 { 00:04:45.646 "code": -32603, 00:04:45.646 "message": "Failed to claim CPU core: 2" 00:04:45.646 } 00:04:45.646 10:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:45.646 10:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:45.646 10:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:45.646 10:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:45.646 10:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:45.646 10:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1336275 /var/tmp/spdk.sock 00:04:45.646 10:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1336275 ']' 00:04:45.646 10:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.646 10:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:45.646 10:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.646 10:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:45.646 10:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.646 10:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.646 10:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:45.646 10:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1336410 /var/tmp/spdk2.sock 00:04:45.646 10:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1336410 ']' 00:04:45.646 10:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:45.646 10:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:45.646 10:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:45.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
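The -32603 response above is the expected outcome of this test: the first target holds cores 0-2 (mask 0x7 = binary 111) while the second asked for cores 2-4 (mask 0x1c = binary 11100), so core 2 is the single contested core, and the second instance's framework_enable_cpumask_locks RPC fails once the first instance has claimed its lock files (/var/tmp/spdk_cpu_lock_000 through _002). A minimal by-hand sketch of the same sequence, using the binary and RPC socket paths from the trace above (waits for each socket to come up are omitted for brevity):

    # two targets on overlapping core masks, both started with locks disabled
    ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &

    # first instance claims /var/tmp/spdk_cpu_lock_000..002 (cores 0, 1, 2)
    ./scripts/rpc.py framework_enable_cpumask_locks

    # second instance now fails with -32603: core 2 is already locked
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks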
00:04:45.646 10:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:45.646 10:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.905 10:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.905 10:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:45.905 10:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:45.905 10:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:45.905 10:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:45.905 10:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:45.905 00:04:45.905 real 0m2.228s 00:04:45.905 user 0m1.016s 00:04:45.905 sys 0m0.146s 00:04:45.905 10:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.905 10:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.905 ************************************ 00:04:45.905 END TEST locking_overlapped_coremask_via_rpc 00:04:45.905 ************************************ 00:04:45.905 10:18:19 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:45.905 10:18:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1336275 ]] 00:04:45.905 10:18:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1336275 00:04:45.905 10:18:19 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1336275 ']' 00:04:45.905 10:18:19 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1336275 00:04:45.905 10:18:19 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:45.905 10:18:19 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:45.905 10:18:19 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1336275 00:04:45.905 10:18:19 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:45.905 10:18:19 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:45.905 10:18:19 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1336275' 00:04:45.905 killing process with pid 1336275 00:04:45.905 10:18:19 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1336275 00:04:45.905 10:18:19 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1336275 00:04:46.473 10:18:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1336410 ]] 00:04:46.473 10:18:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1336410 00:04:46.473 10:18:20 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1336410 ']' 00:04:46.473 10:18:20 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1336410 00:04:46.473 10:18:20 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:46.473 10:18:20 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:04:46.473 10:18:20 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1336410 00:04:46.473 10:18:20 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:46.473 10:18:20 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:46.473 10:18:20 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1336410' 00:04:46.473 killing process with pid 1336410 00:04:46.473 10:18:20 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1336410 00:04:46.473 10:18:20 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1336410 00:04:46.733 10:18:20 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:46.733 10:18:20 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:46.733 10:18:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1336275 ]] 00:04:46.733 10:18:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1336275 00:04:46.733 10:18:20 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1336275 ']' 00:04:46.733 10:18:20 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1336275 00:04:46.733 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1336275) - No such process 00:04:46.733 10:18:20 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1336275 is not found' 00:04:46.733 Process with pid 1336275 is not found 00:04:46.733 10:18:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1336410 ]] 00:04:46.733 10:18:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1336410 00:04:46.733 10:18:20 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1336410 ']' 00:04:46.733 10:18:20 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1336410 00:04:46.733 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1336410) - No such process 00:04:46.733 10:18:20 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1336410 is not found' 00:04:46.733 Process with pid 1336410 is not found 00:04:46.733 10:18:20 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:46.733 00:04:46.733 real 0m14.460s 00:04:46.733 user 0m26.038s 00:04:46.733 sys 0m4.885s 00:04:46.733 10:18:20 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.733 10:18:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:46.733 ************************************ 00:04:46.733 END TEST cpu_locks 00:04:46.733 ************************************ 00:04:46.733 00:04:46.733 real 0m39.278s 00:04:46.733 user 1m15.624s 00:04:46.733 sys 0m8.452s 00:04:46.733 10:18:20 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.733 10:18:20 event -- common/autotest_common.sh@10 -- # set +x 00:04:46.733 ************************************ 00:04:46.733 END TEST event 00:04:46.733 ************************************ 00:04:46.733 10:18:20 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:46.733 10:18:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.733 10:18:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.733 10:18:20 -- common/autotest_common.sh@10 -- # set +x 00:04:46.733 ************************************ 00:04:46.733 START TEST thread 00:04:46.733 ************************************ 00:04:46.733 10:18:20 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:46.993 * Looking for test storage... 00:04:46.993 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:46.993 10:18:20 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:46.993 10:18:20 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:04:46.993 10:18:20 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:46.993 10:18:20 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:46.993 10:18:20 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:46.993 10:18:20 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:46.993 10:18:20 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:46.993 10:18:20 thread -- scripts/common.sh@336 -- # IFS=.-: 00:04:46.993 10:18:20 thread -- scripts/common.sh@336 -- # read -ra ver1 00:04:46.993 10:18:20 thread -- scripts/common.sh@337 -- # IFS=.-: 00:04:46.993 10:18:20 thread -- scripts/common.sh@337 -- # read -ra ver2 00:04:46.993 10:18:20 thread -- scripts/common.sh@338 -- # local 'op=<' 00:04:46.993 10:18:20 thread -- scripts/common.sh@340 -- # ver1_l=2 00:04:46.993 10:18:20 thread -- scripts/common.sh@341 -- # ver2_l=1 00:04:46.993 10:18:20 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:46.993 10:18:20 thread -- scripts/common.sh@344 -- # case "$op" in 00:04:46.993 10:18:20 thread -- scripts/common.sh@345 -- # : 1 00:04:46.993 10:18:20 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:46.993 10:18:20 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:46.993 10:18:20 thread -- scripts/common.sh@365 -- # decimal 1 00:04:46.993 10:18:20 thread -- scripts/common.sh@353 -- # local d=1 00:04:46.993 10:18:20 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:46.993 10:18:20 thread -- scripts/common.sh@355 -- # echo 1 00:04:46.993 10:18:20 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:04:46.993 10:18:20 thread -- scripts/common.sh@366 -- # decimal 2 00:04:46.993 10:18:20 thread -- scripts/common.sh@353 -- # local d=2 00:04:46.993 10:18:20 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:46.993 10:18:20 thread -- scripts/common.sh@355 -- # echo 2 00:04:46.993 10:18:20 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:04:46.993 10:18:20 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:46.993 10:18:20 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:46.993 10:18:20 thread -- scripts/common.sh@368 -- # return 0 00:04:46.993 10:18:20 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:46.993 10:18:20 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:46.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.993 --rc genhtml_branch_coverage=1 00:04:46.993 --rc genhtml_function_coverage=1 00:04:46.993 --rc genhtml_legend=1 00:04:46.993 --rc geninfo_all_blocks=1 00:04:46.993 --rc geninfo_unexecuted_blocks=1 00:04:46.993 00:04:46.993 ' 00:04:46.993 10:18:20 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:46.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.993 --rc genhtml_branch_coverage=1 00:04:46.993 --rc genhtml_function_coverage=1 00:04:46.993 --rc genhtml_legend=1 00:04:46.993 --rc geninfo_all_blocks=1 00:04:46.993 --rc geninfo_unexecuted_blocks=1 00:04:46.993 
00:04:46.993 ' 00:04:46.993 10:18:20 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:46.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.993 --rc genhtml_branch_coverage=1 00:04:46.993 --rc genhtml_function_coverage=1 00:04:46.993 --rc genhtml_legend=1 00:04:46.993 --rc geninfo_all_blocks=1 00:04:46.993 --rc geninfo_unexecuted_blocks=1 00:04:46.993 00:04:46.993 ' 00:04:46.993 10:18:20 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:46.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.993 --rc genhtml_branch_coverage=1 00:04:46.993 --rc genhtml_function_coverage=1 00:04:46.993 --rc genhtml_legend=1 00:04:46.993 --rc geninfo_all_blocks=1 00:04:46.993 --rc geninfo_unexecuted_blocks=1 00:04:46.993 00:04:46.993 ' 00:04:46.993 10:18:20 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:46.993 10:18:20 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:46.993 10:18:20 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.993 10:18:20 thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.993 ************************************ 00:04:46.993 START TEST thread_poller_perf 00:04:46.993 ************************************ 00:04:46.993 10:18:20 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:46.993 [2024-12-12 10:18:20.927807] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:04:46.993 [2024-12-12 10:18:20.927872] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1336851 ] 00:04:46.993 [2024-12-12 10:18:21.005821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.251 [2024-12-12 10:18:21.046121] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.251 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:04:48.188 [2024-12-12T09:18:22.211Z] ====================================== 00:04:48.188 [2024-12-12T09:18:22.211Z] busy:2104284764 (cyc) 00:04:48.188 [2024-12-12T09:18:22.211Z] total_run_count: 424000 00:04:48.188 [2024-12-12T09:18:22.211Z] tsc_hz: 2100000000 (cyc) 00:04:48.188 [2024-12-12T09:18:22.211Z] ====================================== 00:04:48.188 [2024-12-12T09:18:22.211Z] poller_cost: 4962 (cyc), 2362 (nsec) 00:04:48.188 00:04:48.188 real 0m1.182s 00:04:48.188 user 0m1.101s 00:04:48.188 sys 0m0.076s 00:04:48.188 10:18:22 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.188 10:18:22 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:48.188 ************************************ 00:04:48.188 END TEST thread_poller_perf 00:04:48.188 ************************************ 00:04:48.188 10:18:22 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:48.188 10:18:22 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:48.188 10:18:22 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.188 10:18:22 thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.188 ************************************ 00:04:48.188 START TEST thread_poller_perf 00:04:48.188 ************************************ 00:04:48.188 10:18:22 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:48.188 [2024-12-12 10:18:22.179976] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:04:48.188 [2024-12-12 10:18:22.180041] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1337093 ] 00:04:48.446 [2024-12-12 10:18:22.259779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.446 [2024-12-12 10:18:22.299164] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.446 Running 1000 pollers for 1 seconds with 0 microseconds period. 
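Before the 0 microseconds run reports its numbers below, it helps to decode the table above: the printed poller_cost is consistent with the busy cycle count divided by total_run_count, truncated, then converted to nanoseconds via tsc_hz. A worked check against the 1 microsecond figures, written as shell arithmetic for concreteness:

    echo $(( 2104284764 / 424000 ))               # 4962 cyc per poll
    echo $(( 4962 * 1000000000 / 2100000000 ))    # 2362 nsec at the 2.1 GHz TSC

The same relationship holds for the 0 microseconds run that follows, where the table reports 413 cyc and 196 nsec per poll.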
00:04:49.382 [2024-12-12T09:18:23.406Z] ====================================== 00:04:49.383 [2024-12-12T09:18:23.406Z] busy:2101614352 (cyc) 00:04:49.383 [2024-12-12T09:18:23.406Z] total_run_count: 5088000 00:04:49.383 [2024-12-12T09:18:23.406Z] tsc_hz: 2100000000 (cyc) 00:04:49.383 [2024-12-12T09:18:23.406Z] ====================================== 00:04:49.383 [2024-12-12T09:18:23.406Z] poller_cost: 413 (cyc), 196 (nsec) 00:04:49.383 00:04:49.383 real 0m1.181s 00:04:49.383 user 0m1.102s 00:04:49.383 sys 0m0.075s 00:04:49.383 10:18:23 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.383 10:18:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:49.383 ************************************ 00:04:49.383 END TEST thread_poller_perf 00:04:49.383 ************************************ 00:04:49.383 10:18:23 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:49.383 00:04:49.383 real 0m2.676s 00:04:49.383 user 0m2.368s 00:04:49.383 sys 0m0.323s 00:04:49.383 10:18:23 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.383 10:18:23 thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.383 ************************************ 00:04:49.383 END TEST thread 00:04:49.383 ************************************ 00:04:49.642 10:18:23 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:04:49.642 10:18:23 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:49.642 10:18:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.642 10:18:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.642 10:18:23 -- common/autotest_common.sh@10 -- # set +x 00:04:49.642 ************************************ 00:04:49.642 START TEST app_cmdline 00:04:49.642 ************************************ 00:04:49.642 10:18:23 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:49.642 * Looking for test storage... 
00:04:49.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:49.642 10:18:23 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:49.642 10:18:23 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:04:49.642 10:18:23 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:49.642 10:18:23 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:49.642 10:18:23 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.642 10:18:23 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.642 10:18:23 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.642 10:18:23 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.642 10:18:23 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.642 10:18:23 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.642 10:18:23 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.642 10:18:23 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.642 10:18:23 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.642 10:18:23 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.642 10:18:23 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.642 10:18:23 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:04:49.642 10:18:23 app_cmdline -- scripts/common.sh@345 -- # : 1 00:04:49.642 10:18:23 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.642 10:18:23 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:49.642 10:18:23 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:04:49.642 10:18:23 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:04:49.642 10:18:23 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.642 10:18:23 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:04:49.642 10:18:23 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.642 10:18:23 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:04:49.642 10:18:23 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:04:49.642 10:18:23 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.642 10:18:23 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:04:49.642 10:18:23 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.642 10:18:23 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.642 10:18:23 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.642 10:18:23 app_cmdline -- scripts/common.sh@368 -- # return 0 00:04:49.642 10:18:23 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.642 10:18:23 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:49.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.642 --rc genhtml_branch_coverage=1 00:04:49.642 --rc genhtml_function_coverage=1 00:04:49.642 --rc genhtml_legend=1 00:04:49.642 --rc geninfo_all_blocks=1 00:04:49.642 --rc geninfo_unexecuted_blocks=1 00:04:49.642 00:04:49.642 ' 00:04:49.642 10:18:23 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:49.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.642 --rc genhtml_branch_coverage=1 00:04:49.642 --rc genhtml_function_coverage=1 00:04:49.642 --rc genhtml_legend=1 00:04:49.642 --rc geninfo_all_blocks=1 00:04:49.642 --rc geninfo_unexecuted_blocks=1 
00:04:49.642 00:04:49.642 ' 00:04:49.642 10:18:23 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:49.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.642 --rc genhtml_branch_coverage=1 00:04:49.642 --rc genhtml_function_coverage=1 00:04:49.642 --rc genhtml_legend=1 00:04:49.642 --rc geninfo_all_blocks=1 00:04:49.642 --rc geninfo_unexecuted_blocks=1 00:04:49.642 00:04:49.642 ' 00:04:49.642 10:18:23 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:49.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.642 --rc genhtml_branch_coverage=1 00:04:49.643 --rc genhtml_function_coverage=1 00:04:49.643 --rc genhtml_legend=1 00:04:49.643 --rc geninfo_all_blocks=1 00:04:49.643 --rc geninfo_unexecuted_blocks=1 00:04:49.643 00:04:49.643 ' 00:04:49.643 10:18:23 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:04:49.643 10:18:23 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1337387 00:04:49.643 10:18:23 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:04:49.643 10:18:23 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1337387 00:04:49.643 10:18:23 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1337387 ']' 00:04:49.643 10:18:23 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.643 10:18:23 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:49.643 10:18:23 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.643 10:18:23 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:49.643 10:18:23 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:49.902 [2024-12-12 10:18:23.671463] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:04:49.902 [2024-12-12 10:18:23.671511] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1337387 ] 00:04:49.902 [2024-12-12 10:18:23.747209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.902 [2024-12-12 10:18:23.788877] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.182 10:18:24 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:50.182 10:18:24 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:04:50.182 10:18:24 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:04:50.182 { 00:04:50.182 "version": "SPDK v25.01-pre git sha1 b9cf27559", 00:04:50.182 "fields": { 00:04:50.182 "major": 25, 00:04:50.182 "minor": 1, 00:04:50.182 "patch": 0, 00:04:50.182 "suffix": "-pre", 00:04:50.182 "commit": "b9cf27559" 00:04:50.182 } 00:04:50.182 } 00:04:50.469 10:18:24 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:04:50.469 10:18:24 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:04:50.469 10:18:24 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:04:50.469 10:18:24 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:04:50.469 10:18:24 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:04:50.469 10:18:24 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:04:50.469 10:18:24 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.469 10:18:24 app_cmdline -- app/cmdline.sh@26 -- # sort 00:04:50.469 10:18:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:50.469 10:18:24 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.469 10:18:24 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:04:50.469 10:18:24 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:04:50.469 10:18:24 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:50.469 10:18:24 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:04:50.469 10:18:24 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:50.469 10:18:24 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:50.469 10:18:24 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:50.469 10:18:24 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:50.469 10:18:24 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:50.469 10:18:24 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:50.469 10:18:24 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:50.469 10:18:24 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:50.469 10:18:24 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:04:50.469 10:18:24 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:50.469 request: 00:04:50.469 { 00:04:50.469 "method": "env_dpdk_get_mem_stats", 00:04:50.469 "req_id": 1 00:04:50.469 } 00:04:50.469 Got JSON-RPC error response 00:04:50.469 response: 00:04:50.469 { 00:04:50.469 "code": -32601, 00:04:50.469 "message": "Method not found" 00:04:50.469 } 00:04:50.469 10:18:24 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:04:50.469 10:18:24 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:50.469 10:18:24 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:50.469 10:18:24 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:50.469 10:18:24 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1337387 00:04:50.469 10:18:24 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1337387 ']' 00:04:50.469 10:18:24 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1337387 00:04:50.469 10:18:24 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:04:50.469 10:18:24 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:50.469 10:18:24 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1337387 00:04:50.469 10:18:24 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:50.469 10:18:24 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:50.469 10:18:24 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1337387' 00:04:50.469 killing process with pid 1337387 00:04:50.469 10:18:24 app_cmdline -- common/autotest_common.sh@973 -- # kill 1337387 00:04:50.469 10:18:24 app_cmdline -- common/autotest_common.sh@978 -- # wait 1337387 00:04:51.067 00:04:51.067 real 0m1.340s 00:04:51.067 user 0m1.548s 00:04:51.067 sys 0m0.457s 00:04:51.067 10:18:24 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.067 10:18:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:51.067 ************************************ 00:04:51.067 END TEST app_cmdline 00:04:51.067 ************************************ 00:04:51.067 10:18:24 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:51.067 10:18:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.067 10:18:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.067 10:18:24 -- common/autotest_common.sh@10 -- # set +x 00:04:51.067 ************************************ 00:04:51.067 START TEST version 00:04:51.067 ************************************ 00:04:51.067 10:18:24 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:51.067 * Looking for test storage... 
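In the app_cmdline exchange just above, the target was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so exactly those two methods resolve and anything else is rejected with JSON-RPC -32601. A minimal sketch of the same probe, using the paths the test itself uses:

    ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &

    ./scripts/rpc.py spdk_get_version          # allowed: prints the version object
    ./scripts/rpc.py rpc_get_methods           # allowed: lists the two methods
    ./scripts/rpc.py env_dpdk_get_mem_stats    # blocked: -32601 "Method not found"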
00:04:51.067 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:51.067 10:18:24 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:51.067 10:18:24 version -- common/autotest_common.sh@1711 -- # lcov --version 00:04:51.067 10:18:24 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:51.067 10:18:25 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:51.067 10:18:25 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.067 10:18:25 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.067 10:18:25 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.067 10:18:25 version -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.067 10:18:25 version -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.067 10:18:25 version -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.067 10:18:25 version -- scripts/common.sh@337 -- # read -ra ver2 00:04:51.067 10:18:25 version -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.067 10:18:25 version -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.067 10:18:25 version -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.067 10:18:25 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:51.067 10:18:25 version -- scripts/common.sh@344 -- # case "$op" in 00:04:51.067 10:18:25 version -- scripts/common.sh@345 -- # : 1 00:04:51.067 10:18:25 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.067 10:18:25 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:51.067 10:18:25 version -- scripts/common.sh@365 -- # decimal 1 00:04:51.067 10:18:25 version -- scripts/common.sh@353 -- # local d=1 00:04:51.067 10:18:25 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.067 10:18:25 version -- scripts/common.sh@355 -- # echo 1 00:04:51.067 10:18:25 version -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.067 10:18:25 version -- scripts/common.sh@366 -- # decimal 2 00:04:51.067 10:18:25 version -- scripts/common.sh@353 -- # local d=2 00:04:51.067 10:18:25 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.067 10:18:25 version -- scripts/common.sh@355 -- # echo 2 00:04:51.067 10:18:25 version -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.067 10:18:25 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.067 10:18:25 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.067 10:18:25 version -- scripts/common.sh@368 -- # return 0 00:04:51.067 10:18:25 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.067 10:18:25 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:51.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.067 --rc genhtml_branch_coverage=1 00:04:51.067 --rc genhtml_function_coverage=1 00:04:51.067 --rc genhtml_legend=1 00:04:51.067 --rc geninfo_all_blocks=1 00:04:51.067 --rc geninfo_unexecuted_blocks=1 00:04:51.067 00:04:51.067 ' 00:04:51.067 10:18:25 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:51.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.067 --rc genhtml_branch_coverage=1 00:04:51.067 --rc genhtml_function_coverage=1 00:04:51.067 --rc genhtml_legend=1 00:04:51.067 --rc geninfo_all_blocks=1 00:04:51.067 --rc geninfo_unexecuted_blocks=1 00:04:51.067 00:04:51.068 ' 00:04:51.068 10:18:25 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:51.068 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.068 --rc genhtml_branch_coverage=1 00:04:51.068 --rc genhtml_function_coverage=1 00:04:51.068 --rc genhtml_legend=1 00:04:51.068 --rc geninfo_all_blocks=1 00:04:51.068 --rc geninfo_unexecuted_blocks=1 00:04:51.068 00:04:51.068 ' 00:04:51.068 10:18:25 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:51.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.068 --rc genhtml_branch_coverage=1 00:04:51.068 --rc genhtml_function_coverage=1 00:04:51.068 --rc genhtml_legend=1 00:04:51.068 --rc geninfo_all_blocks=1 00:04:51.068 --rc geninfo_unexecuted_blocks=1 00:04:51.068 00:04:51.068 ' 00:04:51.068 10:18:25 version -- app/version.sh@17 -- # get_header_version major 00:04:51.068 10:18:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:51.068 10:18:25 version -- app/version.sh@14 -- # cut -f2 00:04:51.068 10:18:25 version -- app/version.sh@14 -- # tr -d '"' 00:04:51.068 10:18:25 version -- app/version.sh@17 -- # major=25 00:04:51.068 10:18:25 version -- app/version.sh@18 -- # get_header_version minor 00:04:51.068 10:18:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:51.068 10:18:25 version -- app/version.sh@14 -- # cut -f2 00:04:51.068 10:18:25 version -- app/version.sh@14 -- # tr -d '"' 00:04:51.068 10:18:25 version -- app/version.sh@18 -- # minor=1 00:04:51.068 10:18:25 version -- app/version.sh@19 -- # get_header_version patch 00:04:51.068 10:18:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:51.068 10:18:25 version -- app/version.sh@14 -- # cut -f2 00:04:51.068 10:18:25 version -- app/version.sh@14 -- # tr -d '"' 00:04:51.068 10:18:25 version -- app/version.sh@19 -- # patch=0 00:04:51.068 10:18:25 version -- app/version.sh@20 -- # get_header_version suffix 00:04:51.068 10:18:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:51.068 10:18:25 version -- app/version.sh@14 -- # cut -f2 00:04:51.068 10:18:25 version -- app/version.sh@14 -- # tr -d '"' 00:04:51.068 10:18:25 version -- app/version.sh@20 -- # suffix=-pre 00:04:51.068 10:18:25 version -- app/version.sh@22 -- # version=25.1 00:04:51.068 10:18:25 version -- app/version.sh@25 -- # (( patch != 0 )) 00:04:51.068 10:18:25 version -- app/version.sh@28 -- # version=25.1rc0 00:04:51.068 10:18:25 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:04:51.068 10:18:25 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:04:51.326 10:18:25 version -- app/version.sh@30 -- # py_version=25.1rc0 00:04:51.326 10:18:25 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:04:51.326 00:04:51.326 real 0m0.245s 00:04:51.326 user 0m0.159s 00:04:51.326 sys 0m0.130s 00:04:51.326 10:18:25 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.326 
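The version suite above rebuilds the release string from include/spdk/version.h with a grep/cut/tr pipeline and checks it against the Python package's spdk.__version__. A minimal stand-alone sketch of that extraction; get_field is an illustrative name, and the -pre-to-rc0 step is inferred from the traced assignment version=25.1rc0 rather than copied from version.sh:

    # pull one field out of include/spdk/version.h (MAJOR, MINOR, PATCH or SUFFIX)
    get_field() {
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h |
            cut -f2 | tr -d '"'
    }
    major=$(get_field MAJOR)     # 25
    minor=$(get_field MINOR)     # 1
    patch=$(get_field PATCH)     # 0
    suffix=$(get_field SUFFIX)   # -pre
    version="$major.$minor"
    (( patch != 0 )) && version="$version.$patch"
    # assumption: a -pre suffix maps to the rc0 wheel version, matching 25.1rc0
    [[ $suffix == -pre ]] && version="${version}rc0"
    # with PYTHONPATH pointing at spdk/python, as the test sets it:
    python3 -c 'import spdk; print(spdk.__version__)'   # expected: 25.1rc0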
10:18:25 version -- common/autotest_common.sh@10 -- # set +x 00:04:51.326 ************************************ 00:04:51.326 END TEST version 00:04:51.326 ************************************ 00:04:51.326 10:18:25 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:04:51.326 10:18:25 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:04:51.326 10:18:25 -- spdk/autotest.sh@194 -- # uname -s 00:04:51.326 10:18:25 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:04:51.326 10:18:25 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:51.326 10:18:25 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:51.326 10:18:25 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:04:51.326 10:18:25 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:04:51.326 10:18:25 -- spdk/autotest.sh@260 -- # timing_exit lib 00:04:51.326 10:18:25 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:51.326 10:18:25 -- common/autotest_common.sh@10 -- # set +x 00:04:51.326 10:18:25 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:04:51.326 10:18:25 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:04:51.326 10:18:25 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:04:51.326 10:18:25 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:04:51.326 10:18:25 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:04:51.326 10:18:25 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:04:51.326 10:18:25 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:51.326 10:18:25 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:51.326 10:18:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.326 10:18:25 -- common/autotest_common.sh@10 -- # set +x 00:04:51.326 ************************************ 00:04:51.326 START TEST nvmf_tcp 00:04:51.326 ************************************ 00:04:51.326 10:18:25 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:51.326 * Looking for test storage... 
00:04:51.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:51.326 10:18:25 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:51.326 10:18:25 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:51.326 10:18:25 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:51.584 10:18:25 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:51.584 10:18:25 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.584 10:18:25 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.584 10:18:25 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.584 10:18:25 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.584 10:18:25 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.584 10:18:25 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.584 10:18:25 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:51.584 10:18:25 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.584 10:18:25 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.584 10:18:25 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.585 10:18:25 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:51.585 10:18:25 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:51.585 10:18:25 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:04:51.585 10:18:25 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.585 10:18:25 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:51.585 10:18:25 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:51.585 10:18:25 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:04:51.585 10:18:25 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.585 10:18:25 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:04:51.585 10:18:25 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.585 10:18:25 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:51.585 10:18:25 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:04:51.585 10:18:25 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.585 10:18:25 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:04:51.585 10:18:25 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.585 10:18:25 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.585 10:18:25 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.585 10:18:25 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:04:51.585 10:18:25 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.585 10:18:25 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:51.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.585 --rc genhtml_branch_coverage=1 00:04:51.585 --rc genhtml_function_coverage=1 00:04:51.585 --rc genhtml_legend=1 00:04:51.585 --rc geninfo_all_blocks=1 00:04:51.585 --rc geninfo_unexecuted_blocks=1 00:04:51.585 00:04:51.585 ' 00:04:51.585 10:18:25 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:51.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.585 --rc genhtml_branch_coverage=1 00:04:51.585 --rc genhtml_function_coverage=1 00:04:51.585 --rc genhtml_legend=1 00:04:51.585 --rc geninfo_all_blocks=1 00:04:51.585 --rc geninfo_unexecuted_blocks=1 00:04:51.585 00:04:51.585 ' 00:04:51.585 10:18:25 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:04:51.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.585 --rc genhtml_branch_coverage=1 00:04:51.585 --rc genhtml_function_coverage=1 00:04:51.585 --rc genhtml_legend=1 00:04:51.585 --rc geninfo_all_blocks=1 00:04:51.585 --rc geninfo_unexecuted_blocks=1 00:04:51.585 00:04:51.585 ' 00:04:51.585 10:18:25 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:51.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.585 --rc genhtml_branch_coverage=1 00:04:51.585 --rc genhtml_function_coverage=1 00:04:51.585 --rc genhtml_legend=1 00:04:51.585 --rc geninfo_all_blocks=1 00:04:51.585 --rc geninfo_unexecuted_blocks=1 00:04:51.585 00:04:51.585 ' 00:04:51.585 10:18:25 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:04:51.585 10:18:25 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:04:51.585 10:18:25 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:51.585 10:18:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:51.585 10:18:25 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.585 10:18:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:51.585 ************************************ 00:04:51.585 START TEST nvmf_target_core 00:04:51.585 ************************************ 00:04:51.585 10:18:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:51.585 * Looking for test storage... 00:04:51.585 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:51.585 10:18:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:51.585 10:18:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:04:51.585 10:18:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:51.585 10:18:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:51.585 10:18:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.585 10:18:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.585 10:18:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.585 10:18:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.585 10:18:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.585 10:18:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.585 10:18:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:04:51.585 10:18:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.585 10:18:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.585 10:18:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.585 10:18:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:51.585 10:18:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:04:51.585 10:18:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:04:51.585 10:18:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.585 10:18:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:51.585 10:18:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:04:51.585 10:18:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:04:51.585 10:18:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.585 10:18:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:04:51.585 10:18:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.585 10:18:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:04:51.585 10:18:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:04:51.585 10:18:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.585 10:18:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:04:51.585 10:18:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.585 10:18:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.585 10:18:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.585 10:18:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:04:51.585 10:18:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.585 10:18:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:51.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.585 --rc genhtml_branch_coverage=1 00:04:51.585 --rc genhtml_function_coverage=1 00:04:51.585 --rc genhtml_legend=1 00:04:51.585 --rc geninfo_all_blocks=1 00:04:51.585 --rc geninfo_unexecuted_blocks=1 00:04:51.585 00:04:51.585 ' 00:04:51.585 10:18:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:51.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.585 --rc genhtml_branch_coverage=1 00:04:51.585 --rc genhtml_function_coverage=1 00:04:51.585 --rc genhtml_legend=1 00:04:51.585 --rc geninfo_all_blocks=1 00:04:51.585 --rc geninfo_unexecuted_blocks=1 00:04:51.585 00:04:51.585 ' 00:04:51.585 10:18:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:51.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.585 --rc genhtml_branch_coverage=1 00:04:51.585 --rc genhtml_function_coverage=1 00:04:51.585 --rc genhtml_legend=1 00:04:51.585 --rc geninfo_all_blocks=1 00:04:51.585 --rc geninfo_unexecuted_blocks=1 00:04:51.585 00:04:51.585 ' 00:04:51.585 10:18:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:51.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.585 --rc genhtml_branch_coverage=1 00:04:51.585 --rc genhtml_function_coverage=1 00:04:51.585 --rc genhtml_legend=1 00:04:51.585 --rc geninfo_all_blocks=1 00:04:51.585 --rc geninfo_unexecuted_blocks=1 00:04:51.585 00:04:51.585 ' 00:04:51.585 10:18:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:51.845 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:51.845 
************************************ 00:04:51.845 START TEST nvmf_abort 00:04:51.845 ************************************ 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:51.845 * Looking for test storage... 00:04:51.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.845 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:04:51.846 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.846 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.846 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.846 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:51.846 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:04:51.846 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:04:51.846 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.846 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:51.846 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:04:51.846 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:04:51.846 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.846 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:04:51.846 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.846 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:04:51.846 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:04:51.846 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.846 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:04:51.846 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.846 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.846 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.846 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:04:51.846 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.846 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:51.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.846 --rc genhtml_branch_coverage=1 00:04:51.846 --rc genhtml_function_coverage=1 00:04:51.846 --rc genhtml_legend=1 00:04:51.846 --rc geninfo_all_blocks=1 00:04:51.846 --rc geninfo_unexecuted_blocks=1 00:04:51.846 00:04:51.846 ' 00:04:51.846 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:51.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.846 --rc genhtml_branch_coverage=1 00:04:51.846 --rc genhtml_function_coverage=1 00:04:51.846 --rc genhtml_legend=1 00:04:51.846 --rc geninfo_all_blocks=1 00:04:51.846 --rc geninfo_unexecuted_blocks=1 00:04:51.846 00:04:51.846 ' 00:04:51.846 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:51.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.846 --rc genhtml_branch_coverage=1 00:04:51.846 --rc genhtml_function_coverage=1 00:04:51.846 --rc genhtml_legend=1 00:04:51.846 --rc geninfo_all_blocks=1 00:04:51.846 --rc geninfo_unexecuted_blocks=1 00:04:51.846 00:04:51.846 ' 00:04:51.846 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:51.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.846 --rc genhtml_branch_coverage=1 00:04:51.846 --rc genhtml_function_coverage=1 00:04:51.846 --rc genhtml_legend=1 00:04:51.846 --rc geninfo_all_blocks=1 00:04:51.846 --rc geninfo_unexecuted_blocks=1 00:04:51.846 00:04:51.846 ' 00:04:51.846 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:51.846 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:04:51.846 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:04:51.846 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:51.846 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:51.846 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:51.846 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:51.846 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:51.846 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:51.846 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:51.846 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:51.846 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:51.846 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:04:51.846 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:04:51.846 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:51.846 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:51.846 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:51.846 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:51.846 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:51.846 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:04:52.105 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:52.105 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:52.105 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:52.106 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.106 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.106 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.106 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:04:52.106 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.106 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:04:52.106 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:52.106 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:52.106 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:52.106 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:52.106 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:52.106 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:52.106 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:52.106 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:52.106 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:52.106 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:52.106 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:04:52.106 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:04:52.106 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
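[Editor's note] The block replayed above each START TEST banner is common/autotest_common.sh probing the installed lcov: it takes the last field of `lcov --version` (the `awk '{print $NF}'` in the trace), feeds it through scripts/common.sh's `lt`/`cmp_versions` (split fields on `.`, `-`, `:`; compare decimal fields left to right), and on success exports the branch/function-coverage LCOV_OPTS seen in the trace. A minimal standalone re-creation of just the less-than path — the `ver_lt` name is ours, and unlike the real `cmp_versions` it skips the decimal validation of each field:

    # Hypothetical re-creation of the version gate traced above; not SPDK's code.
    ver_lt() {
        local -a ver1 ver2
        local v n
        IFS=.-: read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$2"   # "2"    -> (2)
        n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < n; v++ )); do
            # a missing field compares as 0, mirroring the padded compare in the trace
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal is not less-than
    }

    # Usage mirroring the traced flow:
    ver_lt "$(lcov --version | awk '{print $NF}')" 2 \
        && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'

The `nvmftestinit` call traced just above is summarized after the device scan below.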
00:04:52.106 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:04:52.106 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:52.106 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:52.106 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:52.106 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:52.106 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:52.106 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:52.106 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:52.106 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:04:52.106 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:04:52.106 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:04:52.106 10:18:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:04:58.677 10:18:31 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:04:58.677 Found 0000:af:00.0 (0x8086 - 0x159b) 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:04:58.677 Found 0000:af:00.1 (0x8086 - 0x159b) 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:04:58.677 10:18:31 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:04:58.677 Found net devices under 0000:af:00.0: cvl_0_0 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:04:58.677 Found net devices under 0000:af:00.1: cvl_0_1 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:04:58.677 10:18:31 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:04:58.677 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:04:58.678 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:04:58.678 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:04:58.678 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:04:58.678 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:04:58.678 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:04:58.678 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:04:58.678 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:04:58.678 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:04:58.678 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:04:58.678 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:04:58.678 00:04:58.678 --- 10.0.0.2 ping statistics --- 00:04:58.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:58.678 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:04:58.678 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:04:58.678 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:04:58.678 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:04:58.678 00:04:58.678 --- 10.0.0.1 ping statistics --- 00:04:58.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:58.678 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:04:58.678 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:04:58.678 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:04:58.678 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:04:58.678 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:04:58.678 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:04:58.678 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:04:58.678 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:04:58.678 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:04:58.678 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:04:58.678 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:04:58.678 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:04:58.678 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:58.678 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:58.678 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=1341016 00:04:58.678 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:04:58.678 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1341016 00:04:58.678 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1341016 ']' 00:04:58.678 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.678 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.678 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.678 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.678 10:18:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:58.678 [2024-12-12 10:18:32.011201] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:04:58.678 [2024-12-12 10:18:32.011246] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:04:58.678 [2024-12-12 10:18:32.090350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:58.678 [2024-12-12 10:18:32.133070] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:04:58.678 [2024-12-12 10:18:32.133106] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:04:58.678 [2024-12-12 10:18:32.133113] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:58.678 [2024-12-12 10:18:32.133119] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:58.678 [2024-12-12 10:18:32.133125] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:04:58.678 [2024-12-12 10:18:32.134422] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:04:58.678 [2024-12-12 10:18:32.134471] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.678 [2024-12-12 10:18:32.134471] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:04:58.678 10:18:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.678 10:18:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:04:58.678 10:18:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:04:58.678 10:18:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:58.678 10:18:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:58.678 10:18:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:04:58.678 10:18:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:04:58.678 10:18:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.678 10:18:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:58.678 [2024-12-12 10:18:32.278777] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:58.678 10:18:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.678 10:18:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:04:58.678 10:18:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.678 10:18:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:58.678 Malloc0 00:04:58.678 10:18:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.678 10:18:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:04:58.678 10:18:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.678 10:18:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:58.678 Delay0 
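[Editor's note] Condensed from the `nvmftestinit`/`nvmfappstart` trace since 00:04:58: the harness splits the two cvl_0_* ports the ice driver exposed between the host (initiator side) and a network namespace (target side), opens the NVMe/TCP port in iptables, then starts nvmf_tgt inside the namespace and builds the storage stack over its RPC socket. Every command below appears in the trace; only the iptables comment tag is omitted, and `rpc_cmd` is SPDK's wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock:

    # network split (nvmf/common.sh nvmf_tcp_init)
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # target app + storage stack (rpc_cmd routes to scripts/rpc.py)
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000

The subsystem, namespace, and listener rpc_cmd steps that complete the target follow in the trace below.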
00:04:58.678 10:18:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.678 10:18:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:04:58.678 10:18:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.678 10:18:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:58.678 10:18:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.678 10:18:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:04:58.678 10:18:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.678 10:18:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:58.678 10:18:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.678 10:18:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:04:58.678 10:18:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.678 10:18:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:58.678 [2024-12-12 10:18:32.367803] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:04:58.678 10:18:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.678 10:18:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:04:58.678 10:18:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.678 10:18:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:58.678 10:18:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.678 10:18:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:04:58.678 [2024-12-12 10:18:32.500226] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:00.582 Initializing NVMe Controllers 00:05:00.582 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:00.582 controller IO queue size 128 less than required 00:05:00.582 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:00.582 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:00.582 Initialization complete. Launching workers. 
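[Editor's note] The stress itself is driven by the example binary invoked just above:

    build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128

It connects to nqn.2016-06.io.spdk:cnode0, keeps up to 128 reads in flight, and submits NVMe abort commands targeting them; the one-second artificial latency configured on Delay0 above (-r/-t/-w/-n 1000000, in microseconds) keeps I/O queued long enough for the aborts to land. Read the counters below in that light: "failed" I/O are the successfully aborted ones, and the success/unsuccessful split on the CTRLR line counts the abort commands themselves. This reading is inferred from the configuration above, not from the example's output format documentation.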
00:05:00.582 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 125, failed: 37181 00:05:00.582 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37244, failed to submit 62 00:05:00.582 success 37185, unsuccessful 59, failed 0 00:05:00.582 10:18:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:00.582 10:18:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.582 10:18:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:00.582 10:18:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.582 10:18:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:00.582 10:18:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:00.582 10:18:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:00.582 10:18:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:00.582 10:18:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:00.582 10:18:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:00.582 10:18:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:00.582 10:18:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:00.582 rmmod nvme_tcp 00:05:00.841 rmmod nvme_fabrics 00:05:00.841 rmmod nvme_keyring 00:05:00.841 10:18:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:00.841 10:18:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:00.841 10:18:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:00.841 10:18:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1341016 ']' 00:05:00.841 10:18:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1341016 00:05:00.841 10:18:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1341016 ']' 00:05:00.841 10:18:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1341016 00:05:00.841 10:18:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:00.841 10:18:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:00.841 10:18:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1341016 00:05:00.841 10:18:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:00.841 10:18:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:00.841 10:18:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1341016' 00:05:00.841 killing process with pid 1341016 00:05:00.841 10:18:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1341016 00:05:00.841 10:18:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1341016 00:05:01.100 10:18:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:01.100 10:18:34 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:01.100 10:18:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:01.100 10:18:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:01.100 10:18:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:01.100 10:18:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:01.100 10:18:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:01.100 10:18:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:01.100 10:18:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:01.100 10:18:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:01.100 10:18:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:01.100 10:18:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:03.004 10:18:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:03.004 00:05:03.004 real 0m11.290s 00:05:03.004 user 0m11.670s 00:05:03.004 sys 0m5.415s 00:05:03.004 10:18:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.004 10:18:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:03.004 ************************************ 00:05:03.004 END TEST nvmf_abort 00:05:03.004 ************************************ 00:05:03.004 10:18:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:03.004 10:18:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:03.004 10:18:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.004 10:18:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:03.264 ************************************ 00:05:03.264 START TEST nvmf_ns_hotplug_stress 00:05:03.264 ************************************ 00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:03.264 * Looking for test storage... 
00:05:03.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:03.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.264 --rc genhtml_branch_coverage=1 00:05:03.264 --rc genhtml_function_coverage=1 00:05:03.264 --rc genhtml_legend=1 00:05:03.264 --rc geninfo_all_blocks=1 00:05:03.264 --rc geninfo_unexecuted_blocks=1 00:05:03.264 00:05:03.264 ' 00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:03.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.264 --rc genhtml_branch_coverage=1 00:05:03.264 --rc genhtml_function_coverage=1 00:05:03.264 --rc genhtml_legend=1 00:05:03.264 --rc geninfo_all_blocks=1 00:05:03.264 --rc geninfo_unexecuted_blocks=1 00:05:03.264 00:05:03.264 ' 00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:03.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.264 --rc genhtml_branch_coverage=1 00:05:03.264 --rc genhtml_function_coverage=1 00:05:03.264 --rc genhtml_legend=1 00:05:03.264 --rc geninfo_all_blocks=1 00:05:03.264 --rc geninfo_unexecuted_blocks=1 00:05:03.264 00:05:03.264 ' 00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:03.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.264 --rc genhtml_branch_coverage=1 00:05:03.264 --rc genhtml_function_coverage=1 00:05:03.264 --rc genhtml_legend=1 00:05:03.264 --rc geninfo_all_blocks=1 00:05:03.264 --rc geninfo_unexecuted_blocks=1 00:05:03.264 00:05:03.264 ' 00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s
00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob
00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.264 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.265 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.265 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:03.265 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.265 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:03.265 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:03.265 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:03.265 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:03.265 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:03.265 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:03.265 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:03.265 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:03.265 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:03.265 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:03.265 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:03.265 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:03.265 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:03.265 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:03.265 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:03.265 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:03.265 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:03.265 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:03.265 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:03.265 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:03.265 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:03.265 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:03.265 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:03.265 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:03.265 10:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:05:09.837 Found 0000:af:00.0 (0x8086 - 0x159b) 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:09.837 
10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:05:09.837 Found 0000:af:00.1 (0x8086 - 0x159b) 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:05:09.837 Found net devices under 0000:af:00.0: cvl_0_0 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:05:09.837 Found net devices under 0000:af:00.1: cvl_0_1
00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes
00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:05:09.837 10:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:05:09.837 10:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:05:09.837 10:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:05:09.837 10:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:05:09.837 10:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
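At this point nvmf/common.sh has built the split topology the test runs on: the two E810 ports sit on the same physical loop, the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace with 10.0.0.2, and the initiator-side port cvl_0_1 stays in the default namespace with 10.0.0.1, so a single host can exercise NVMe/TCP over real NICs. The equivalent standalone sequence, with hypothetical port names eth0/eth1 standing in for cvl_0_0/cvl_0_1 and tgt_ns for the namespace:

    ip netns add tgt_ns                              # hypothetical namespace name
    ip link set eth0 netns tgt_ns                    # target port disappears into the ns
    ip addr add 10.0.0.1/24 dev eth1                 # initiator side, default namespace
    ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev eth0
    ip link set eth1 up
    ip netns exec tgt_ns ip link set eth0 up
    ip netns exec tgt_ns ip link set lo up
    ip netns exec tgt_ns ping -c 1 10.0.0.1          # cross-namespace reachability check

The loopback bring-up, firewall rule, and ping checks are exactly what the log records next.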
00:05:09.837 10:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:05:09.837 10:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:05:09.838 10:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:05:09.838 10:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:05:09.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:05:09.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.368 ms
00:05:09.838 
00:05:09.838 --- 10.0.0.2 ping statistics ---
00:05:09.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:05:09.838 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms
00:05:09.838 10:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:05:09.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:05:09.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms
00:05:09.838 
00:05:09.838 --- 10.0.0.1 ping statistics ---
00:05:09.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:05:09.838 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms
00:05:09.838 10:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:05:09.838 10:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0
00:05:09.838 10:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:05:09.838 10:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:05:09.838 10:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:05:09.838 10:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:05:09.838 10:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:05:09.838 10:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:05:09.838 10:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:05:09.838 10:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:05:09.838 10:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:05:09.838 10:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:09.838 10:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:05:09.838 10:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1345131
00:05:09.838 10:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1345131
00:05:09.838 10:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:05:09.838 10:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
1345131 ']' 00:05:09.838 10:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.838 10:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.838 10:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.838 10:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.838 10:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:09.838 [2024-12-12 10:18:43.277766] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:05:09.838 [2024-12-12 10:18:43.277817] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:09.838 [2024-12-12 10:18:43.355300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:09.838 [2024-12-12 10:18:43.396043] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:09.838 [2024-12-12 10:18:43.396076] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:09.838 [2024-12-12 10:18:43.396083] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:09.838 [2024-12-12 10:18:43.396089] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:09.838 [2024-12-12 10:18:43.396094] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
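With nvmf_tgt up inside the namespace (the -m 0xE mask pins it to cores 1-3, which is why three reactor threads are reported just below) and waitforlisten having polled until /var/tmp/spdk.sock answered, the harness provisions the target through scripts/rpc.py. Condensed from the invocations that follow in this log, with rpc.py abbreviating the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_malloc_create 32 512 -b Malloc0            # RAM-backed bdev
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py bdev_null_create NULL1 1000 512                 # discard-everything bdev
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

Delay0 wraps Malloc0 with artificial latency (the bdev_delay parameters are in microseconds, so roughly a second per I/O) to keep requests in flight whenever a namespace is yanked; NULL1 is the namespace the loop resizes. The spdk_nvme_perf initiator is then started against 10.0.0.2:4420 with -t 30 -q 128 -w randread -o 512 -Q 1000, exactly as traced below.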
00:05:09.838 [2024-12-12 10:18:43.397375] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:09.838 [2024-12-12 10:18:43.397413] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.838 [2024-12-12 10:18:43.397414] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:09.838 10:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.838 10:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:09.838 10:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:09.838 10:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:09.838 10:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:09.838 10:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:09.838 10:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:09.838 10:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:09.838 [2024-12-12 10:18:43.706894] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:09.838 10:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:10.097 10:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:10.097 [2024-12-12 10:18:44.096276] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:10.355 10:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:10.355 10:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:10.614 Malloc0 00:05:10.614 10:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:10.872 Delay0 00:05:10.872 10:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:11.131 10:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:11.131 NULL1 00:05:11.131 10:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:11.389 10:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:11.389 10:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1345431 00:05:11.389 10:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1345431 00:05:11.389 10:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:11.647 10:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:11.905 10:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:11.905 10:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:11.905 true 00:05:12.164 10:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1345431 00:05:12.164 10:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:12.164 10:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:12.422 10:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:12.422 10:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:12.680 true 00:05:12.680 10:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1345431 00:05:12.680 10:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:13.616 Read completed with error (sct=0, sc=11) 00:05:13.875 10:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:13.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:13.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:13.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:13.875 10:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:13.875 10:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:14.133 true 00:05:14.133 10:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1345431 00:05:14.133 10:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:14.392 10:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:14.650 10:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:14.650 10:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:14.650 true 00:05:14.650 10:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1345431 00:05:14.650 10:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:16.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:16.028 10:18:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:16.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:16.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:16.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:16.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:16.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:16.287 10:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:16.287 10:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:16.287 true 00:05:16.287 10:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1345431 00:05:16.287 10:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:17.224 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:17.225 10:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:17.225 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:17.484 10:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:17.484 10:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1006 00:05:17.484 true 00:05:17.484 10:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1345431 00:05:17.484 10:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:17.743 10:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:18.002 10:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:18.003 10:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:18.261 true 00:05:18.261 10:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1345431 00:05:18.261 10:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:18.261 10:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:18.261 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:18.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:18.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:18.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:18.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:18.520 10:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:18.520 10:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:18.780 true 00:05:18.780 10:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1345431 00:05:18.780 10:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:19.716 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:19.716 10:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:19.716 10:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:19.716 10:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:19.975 true 00:05:19.975 10:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1345431 00:05:19.975 10:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:20.234 10:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:20.492 10:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:20.492 10:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:20.492 true 00:05:20.492 10:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1345431 00:05:20.492 10:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:21.868 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:21.868 10:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:21.868 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:21.868 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:21.868 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:21.868 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:21.868 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:21.868 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:21.868 10:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:21.868 10:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:22.126 true 00:05:22.126 10:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1345431 00:05:22.126 10:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:23.060 10:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:23.060 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:23.060 10:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:23.060 10:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:23.319 true 00:05:23.319 10:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1345431 00:05:23.319 10:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:23.577 10:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:23.836 10:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:23.836 10:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:23.836 true 00:05:23.836 10:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1345431 00:05:23.836 10:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:25.212 10:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:25.212 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.212 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.212 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.212 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.212 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.212 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.470 10:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:25.470 10:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:25.470 true 00:05:25.470 10:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1345431 00:05:25.470 10:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:26.405 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:26.405 10:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:26.664 10:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:26.664 10:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:26.664 true 00:05:26.664 10:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1345431 00:05:26.664 10:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:26.923 10:19:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:27.182 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:27.182 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:27.440 true 00:05:27.440 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1345431 00:05:27.440 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.376 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:28.376 10:19:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.376 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:28.376 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:28.677 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:28.677 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:28.677 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:28.677 10:19:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:28.677 10:19:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:28.951 true 00:05:28.951 10:19:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1345431 00:05:28.951 10:19:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.546 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:29.546 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.804 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:29.804 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:29.804 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:30.063 true 00:05:30.063 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1345431 00:05:30.063 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.323 10:19:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.323 10:19:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:30.323 10:19:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:30.580 true 00:05:30.580 10:19:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1345431 00:05:30.580 10:19:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.954 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.954 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:31.954 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:31.954 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:31.954 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:31.954 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:31.954 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:31.954 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:31.954 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:32.212 true 00:05:32.212 10:19:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1345431 00:05:32.212 10:19:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.148 10:19:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.148 10:19:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:33.148 10:19:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:33.406 true 00:05:33.406 10:19:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1345431 00:05:33.406 10:19:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.665 10:19:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.665 10:19:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:33.665 10:19:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:33.924 true 00:05:33.924 10:19:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1345431 00:05:33.924 10:19:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:35.301 10:19:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:35.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:35.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:35.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:35.302 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:35.302 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:35.302 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:35.560 true 00:05:35.560 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1345431 00:05:35.560 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.497 10:19:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.497 10:19:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:36.497 10:19:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:36.755 true 00:05:36.755 10:19:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1345431 00:05:36.755 10:19:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.014 10:19:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.014 10:19:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:37.014 10:19:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:37.273 true 00:05:37.273 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1345431 00:05:37.273 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.208 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.208 10:19:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.467 10:19:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:38.467 10:19:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:38.725 true 00:05:38.725 10:19:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1345431 00:05:38.725 10:19:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.661 10:19:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.661 10:19:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:39.661 10:19:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:39.920 true 00:05:39.920 10:19:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1345431 00:05:39.920 10:19:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.178 10:19:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.437 10:19:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:05:40.437 10:19:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:05:40.437 true 00:05:40.437 10:19:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1345431
00:05:40.437 10:19:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:41.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:41.813 10:19:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:41.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:41.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:41.813 Initializing NVMe Controllers
00:05:41.813 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:05:41.813 Controller IO queue size 128, less than required.
00:05:41.813 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:05:41.813 Controller IO queue size 128, less than required.
00:05:41.813 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:05:41.813 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:05:41.813 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:05:41.813 Initialization complete. Launching workers.
00:05:41.813 ========================================================
00:05:41.813 Latency(us)
00:05:41.813 Device Information : IOPS MiB/s Average min max
00:05:41.813 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2180.23 1.06 38731.20 1960.94 1016729.97
00:05:41.813 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17263.88 8.43 7396.56 1579.67 295835.11
00:05:41.813 ========================================================
00:05:41.813 Total : 19444.11 9.49 10910.05 1579.67 1016729.97
00:05:41.813
00:05:41.813 10:19:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:05:41.813 10:19:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:05:42.072 true
00:05:42.072 10:19:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1345431
00:05:42.072 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1345431) - No such process
00:05:42.072 10:19:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1345431
00:05:42.072 10:19:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:42.330 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:42.589 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:05:42.589 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress --
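[Editor's note] The block above is the I/O generator's shutdown summary: two namespaces driven from one core, with the Total row formed from the per-namespace figures. The IOPS and MiB/s columns simply sum (2180.23 + 17263.88 = 19444.11), while the Total average latency is the IOPS-weighted mean, (2180.23 * 38731.20 + 17263.88 * 7396.56) / 19444.11, which comes to roughly 10910 us as printed, not the arithmetic mean of the two averages. The "Read completed with error (sct=0, sc=11)" suppressions are reads failing while NSID 1 is momentarily detached, which is the point of the test. The surrounding sh@44-50 records are the hot-plug loop itself; reconstructed from the trace alone, it behaves like the following minimal sketch (the standalone framing and variable names are illustrative, not the script verbatim):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subsys=nqn.2016-06.io.spdk:cnode1
    null_size=1019
    # Re-plug namespace 1 and grow NULL1 while the I/O generator
    # (pid 1345431 in this run) is alive; kill -0 sends no signal,
    # it only tests that the process still exists.
    while kill -0 1345431; do                            # sh@44
        "$rpc" nvmf_subsystem_remove_ns "$subsys" 1      # sh@45: hot-remove NSID 1
        "$rpc" nvmf_subsystem_add_ns "$subsys" Delay0    # sh@46: hot-add the Delay0 bdev back
        null_size=$((null_size + 1))                     # sh@49
        "$rpc" bdev_null_resize NULL1 "$null_size"       # sh@50: resize under active I/O
    done

Once the generator exits, kill -0 fails ("No such process" above), the loop ends, and the test removes both namespaces before moving on. Resizing NULL1 on every pass matters because Delay0 is presumably a delay bdev layered on NULL1 (its creation happened earlier in the test), so the resize exercises the namespace-resize notification path while I/O is in flight.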
target/ns_hotplug_stress.sh@58 -- # pids=() 00:05:42.589 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:05:42.589 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:42.589 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:05:42.589 null0 00:05:42.589 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:42.589 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:42.589 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:05:42.847 null1 00:05:42.847 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:42.848 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:42.848 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:05:43.106 null2 00:05:43.106 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:43.106 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:43.106 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:05:43.106 null3 00:05:43.106 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:43.106 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:43.106 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:05:43.365 null4 00:05:43.365 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:43.365 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:43.365 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:05:43.623 null5 00:05:43.623 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:43.623 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:43.623 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:05:43.883 null6 00:05:43.883 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:43.883 10:19:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:43.883 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:05:43.883 null7 00:05:43.883 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:43.883 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:43.883 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:05:43.883 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:43.883 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:05:43.883 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:43.883 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:05:43.883 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:43.883 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:43.883 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:43.883 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.883 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:43.883 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:43.883 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:43.883 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:05:43.883 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:43.883 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:05:43.883 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:43.883 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.883 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:43.883 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:43.883 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:05:43.883 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:43.883 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:43.883 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:05:43.884 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:43.884 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.884 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:43.884 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:43.884 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:43.884 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:05:43.884 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:43.884 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:05:43.884 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:43.884 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.884 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:43.884 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:43.884 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:43.884 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:05:43.884 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:43.884 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:05:43.884 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:43.884 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.884 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:43.884 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:43.884 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:43.884 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:43.884 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:05:43.884 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:05:43.884 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:43.884 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:43.884 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:43.884 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.884 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:43.884 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:43.884 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:05:43.884 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:05:43.884 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:43.884 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:43.884 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:43.884 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:05:43.884 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:43.884 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.884 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:05:43.884 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1350894 1350895 1350897 1350899 1350901 1350903 1350905 1350906 00:05:43.884 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:43.884 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:43.884 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.884 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:44.144 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
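[Editor's note] From here on, the interleaved sh@16-18 records are eight concurrent workers, one per namespace. The setup traced at sh@58-66 created eight 100 MiB null bdevs with a 4096-byte block size ("bdev_null_create nullN 100 4096"), launched add_remove for each in the background, collected the eight pids listed at the sh@66 wait above, and now waits for them. Reconstructed from the sh@14-18 and sh@58-66 records, the phase looks like this sketch (illustrative, not the script verbatim; rpc and subsys as in the previous sketch):

    # add_remove <nsid> <bdev>: hot-add then hot-remove one namespace, ten times
    add_remove() {
        local nsid=$1 bdev=$2                                          # sh@14
        for ((i = 0; i < 10; i++)); do                                 # sh@16
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$subsys" "$bdev"  # sh@17
            "$rpc" nvmf_subsystem_remove_ns "$subsys" "$nsid"          # sh@18
        done
    }

    nthreads=8                                                         # sh@58
    pids=()
    for ((i = 0; i < nthreads; i++)); do                               # sh@59
        "$rpc" bdev_null_create "null$i" 100 4096                      # sh@60: 100 MiB, 4096-byte blocks
    done
    for ((i = 0; i < nthreads; i++)); do                               # sh@62
        add_remove "$((i + 1))" "null$i" &                             # sh@63: NSID i+1 backed by null<i>
        pids+=($!)                                                     # sh@64
    done
    wait "${pids[@]}"                                                  # sh@66

Each worker owns a fixed pairing (worker k drives NSID k+1 backed by null<k>), so any add/remove pair in the interleaved trace below can be attributed to its worker by namespace number alone.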
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.144 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:44.144 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:44.144 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:44.144 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:44.144 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:44.144 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:44.144 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:44.403 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.403 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.403 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:44.403 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.403 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.403 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:44.403 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.403 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.403 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:44.403 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.403 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.403 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:44.403 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.403 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.403 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:44.403 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.403 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.403 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:44.403 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.403 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.403 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.403 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:44.403 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.403 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:44.662 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:44.662 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:44.662 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:44.662 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.662 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:44.662 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:44.663 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:44.663 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:44.922 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.922 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.922 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:44.922 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.922 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.922 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:44.922 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.922 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.922 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:44.922 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.922 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.922 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:44.922 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.922 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.922 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:44.922 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.922 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.922 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:44.922 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.922 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.922 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:44.922 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.922 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.922 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:44.922 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:44.922 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:44.922 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:44.922 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:44.922 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:44.922 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:44.922 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.922 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:45.181 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.181 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.181 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:45.181 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.181 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.181 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:45.181 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:05:45.181 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.181 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:45.181 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.181 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.181 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:45.181 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.181 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.182 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:45.182 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.182 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.182 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.182 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.182 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:45.182 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:45.182 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.182 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.182 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:45.440 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:45.441 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:45.441 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.441 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:45.441 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:45.441 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:45.441 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:45.441 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:45.699 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.700 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.700 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:45.700 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.700 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.700 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:45.700 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.700 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.700 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:45.700 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.700 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.700 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:45.700 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.700 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.700 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:45.700 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:05:45.700 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.700 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.700 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:45.700 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.700 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:45.700 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.700 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.700 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:45.960 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:45.960 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:45.960 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:45.960 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:45.960 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.960 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:45.960 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:45.960 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:45.960 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.960 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.960 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
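[Editor's note] For readers tracing the interleaving, every record in this stream follows one fixed shape. Taking one record from just above:

    00:05:45.960 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3

it reads as: 00:05:45.960 is time elapsed since the job started (the same column used throughout the build log); 10:19:19 appears to be a wall-clock stamp added by the test harness; nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress names the suite, group, and test; target/ns_hotplug_stress.sh@18 is the script and line whose xtrace produced the record; everything after "#" is the traced command. Records carrying only the elapsed column (the bare "true" results of bdev_null_resize, the "Message suppressed" notices, the latency table earlier) are the commands' own stdout/stderr rather than xtrace.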
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:45.960 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.960 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.960 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:45.960 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.960 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.960 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:45.960 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.960 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.960 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:46.219 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.219 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.219 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:46.219 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.219 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.219 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:46.219 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.219 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.219 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:46.219 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.219 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.219 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:46.219 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:46.219 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:46.219 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:46.219 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:46.219 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.219 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:46.219 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:46.219 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:46.479 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.479 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.479 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:46.479 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.479 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.479 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:46.479 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.479 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.479 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:46.479 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.479 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.479 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:46.479 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.479 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.479 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:46.479 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.479 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.479 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:46.479 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.479 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.479 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:46.479 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.479 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.479 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:46.739 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:46.739 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:46.739 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:46.739 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.739 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:46.739 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:46.739 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:46.739 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:46.998 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.998 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.998 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:46.998 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.998 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.998 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:46.998 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.998 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.998 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:46.998 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.998 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.998 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:46.998 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.998 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.998 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:46.998 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.998 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.998 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:46.998 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.998 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.998 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:46.998 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.998 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.998 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:46.998 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:46.998 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:46.998 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:46.998 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.998 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:46.998 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:47.258 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:47.258 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:47.258 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.258 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.258 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:47.258 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.258 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.258 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:47.258 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:05:47.258 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.258 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:47.258 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.258 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.258 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:47.258 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.258 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.258 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:47.258 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.258 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.258 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:47.258 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.258 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.258 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:47.258 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.258 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.258 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:47.517 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:47.517 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:47.517 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:47.517 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:47.517 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.517 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:47.517 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:47.517 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:47.777 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.777 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.777 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:47.777 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.777 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.777 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:47.777 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.777 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.777 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:47.777 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.777 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.777 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:47.777 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.777 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.777 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:47.777 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:05:47.777 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.777 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:47.777 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.777 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.777 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:47.777 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.777 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.777 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:48.036 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:48.036 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:48.036 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:48.036 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.036 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:48.036 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:48.036 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:48.036 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:48.036 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.036 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.037 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.037 10:19:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.037 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.037 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.037 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.037 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.037 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.037 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.037 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.037 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.037 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.037 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.296 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.296 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.296 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:05:48.296 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:05:48.296 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:48.296 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:05:48.296 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:48.296 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:05:48.296 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:48.296 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:48.296 rmmod nvme_tcp 00:05:48.296 rmmod nvme_fabrics 00:05:48.296 rmmod nvme_keyring 00:05:48.296 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:48.296 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:05:48.296 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:05:48.296 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1345131 ']' 00:05:48.296 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1345131 00:05:48.296 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1345131 ']' 00:05:48.296 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1345131 00:05:48.296 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:05:48.296 10:19:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:48.296 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1345131 00:05:48.296 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:48.296 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:48.296 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1345131' 00:05:48.296 killing process with pid 1345131 00:05:48.296 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1345131 00:05:48.296 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1345131 00:05:48.556 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:48.556 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:48.556 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:48.556 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:05:48.556 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:05:48.556 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:48.556 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:05:48.556 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:48.556 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:48.556 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:48.556 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:48.556 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:50.462 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:50.462 00:05:50.462 real 0m47.383s 00:05:50.462 user 3m13.803s 00:05:50.462 sys 0m15.619s 00:05:50.462 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.462 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:50.462 ************************************ 00:05:50.462 END TEST nvmf_ns_hotplug_stress 00:05:50.462 ************************************ 00:05:50.462 10:19:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:50.462 10:19:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:50.462 10:19:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.462 10:19:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:50.722 
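The ns_hotplug_stress trace above reduces to a small loop: ten passes that add namespaces 1-8 (NSID n backed by null bdev null(n-1)) in varying order and then hot-remove them all. A minimal sketch, reconstructed from the ns_hotplug_stress.sh@16-18 xtrace lines rather than the script itself; the shuffled NSID order is an assumption inferred from the non-sequential order in the trace:

    # Minimal sketch of the traced hotplug loop; the real script may differ.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for ((i = 0; i < 10; i++)); do
        for n in $(seq 1 8 | shuf); do    # add NSIDs 1-8; shuf is an assumption
            "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
        done
        for n in $(seq 1 8 | shuf); do    # then hot-remove every namespace
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
        done
    done

Hammering attach and detach back to back like this is what shakes out races between namespace hotplug and hosts that are connected while it happens.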
************************************ 00:05:50.722 START TEST nvmf_delete_subsystem 00:05:50.722 ************************************ 00:05:50.722 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:50.722 * Looking for test storage... 00:05:50.722 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:50.722 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:50.722 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:05:50.722 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:50.722 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:50.722 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.722 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.722 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.722 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.722 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.722 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:50.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.723 --rc genhtml_branch_coverage=1 00:05:50.723 --rc genhtml_function_coverage=1 00:05:50.723 --rc genhtml_legend=1 00:05:50.723 --rc geninfo_all_blocks=1 00:05:50.723 --rc geninfo_unexecuted_blocks=1 00:05:50.723 00:05:50.723 ' 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:50.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.723 --rc genhtml_branch_coverage=1 00:05:50.723 --rc genhtml_function_coverage=1 00:05:50.723 --rc genhtml_legend=1 00:05:50.723 --rc geninfo_all_blocks=1 00:05:50.723 --rc geninfo_unexecuted_blocks=1 00:05:50.723 00:05:50.723 ' 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:50.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.723 --rc genhtml_branch_coverage=1 00:05:50.723 --rc genhtml_function_coverage=1 00:05:50.723 --rc genhtml_legend=1 00:05:50.723 --rc geninfo_all_blocks=1 00:05:50.723 --rc geninfo_unexecuted_blocks=1 00:05:50.723 00:05:50.723 ' 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:50.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.723 --rc genhtml_branch_coverage=1 00:05:50.723 --rc genhtml_function_coverage=1 00:05:50.723 --rc genhtml_legend=1 00:05:50.723 --rc geninfo_all_blocks=1 00:05:50.723 --rc geninfo_unexecuted_blocks=1 00:05:50.723 00:05:50.723 ' 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:50.723 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:50.723 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:05:50.724 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:05:57.292 Found 0000:af:00.0 (0x8086 - 0x159b) 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:57.292 
10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:05:57.292 Found 0000:af:00.1 (0x8086 - 0x159b) 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:05:57.292 Found net devices under 0000:af:00.0: cvl_0_0 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:05:57.292 Found net devices under 0000:af:00.1: cvl_0_1 
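At this point the init code has classified the host's NICs by PCI vendor:device ID (0x8086:0x159b is the E810 family, bound to the ice driver as the trace shows) and resolved each function's kernel interface through sysfs. A sketch of that resolution step, using the same expressions as the nvmf/common.sh@410-429 trace and the two addresses found on this host:

    # Resolve net interfaces for the two E810 functions the scan reported.
    for pci in 0000:af:00.0 0000:af:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done

The two interfaces, cvl_0_0 and cvl_0_1, become the target and initiator ends of the test link in the netns setup that follows.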
00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:57.292 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:57.293 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:57.293 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms 00:05:57.293 00:05:57.293 --- 10.0.0.2 ping statistics --- 00:05:57.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:57.293 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:57.293 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:57.293 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:05:57.293 00:05:57.293 --- 10.0.0.1 ping statistics --- 00:05:57.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:57.293 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1355356 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1355356 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1355356 ']' 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.293 10:19:30 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.293 [2024-12-12 10:19:30.670545] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:05:57.293 [2024-12-12 10:19:30.670601] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:57.293 [2024-12-12 10:19:30.748431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.293 [2024-12-12 10:19:30.787496] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:57.293 [2024-12-12 10:19:30.787532] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:57.293 [2024-12-12 10:19:30.787539] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:57.293 [2024-12-12 10:19:30.787545] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:57.293 [2024-12-12 10:19:30.787550] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:57.293 [2024-12-12 10:19:30.788708] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.293 [2024-12-12 10:19:30.788709] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.293 [2024-12-12 10:19:30.937615] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:57.293 10:19:30 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.293 [2024-12-12 10:19:30.957830] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.293 NULL1 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.293 Delay0 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1355446 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:05:57.293 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:05:57.293 [2024-12-12 10:19:31.068719] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
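The delete_subsystem setup traced above is compact: create the TCP transport, create subsystem cnode1 with a listener on 10.0.0.2:4420 (the address assigned inside the target's network namespace earlier), back it with a null bdev wrapped in a delay bdev so commands sit in flight, then run spdk_nvme_perf against it and delete the subsystem mid-run. A condensed replay of the same commands with comments added; the latency comment reflects SPDK's documented microsecond arguments to bdev_delay_create, not anything stated in the log:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$rpc" bdev_null_create NULL1 1000 512     # 1000 MiB backing bdev, 512-byte blocks
    "$rpc" bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000    # ~1 s read/write latencies
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    sleep 2
    "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

With 128 commands queued against a roughly one-second delay bdev, the deletion is all but guaranteed to catch I/O in flight, which is exactly the condition the error completions below demonstrate.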
00:05:59.195 10:19:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:05:59.195 10:19:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:59.195 10:19:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:05:59.195 [log condensed: a long run of "Read/Write completed with error (sct=0, sc=8)" completions, interleaved with "starting I/O failed: -6", continues between the distinct state-change errors kept below -- these are the in-flight perf I/Os being aborted as cnode1 is deleted]
00:05:59.195 [2024-12-12 10:19:33.103980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219f780 is same with the state(6) to be set
00:05:59.195 [2024-12-12 10:19:33.105098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fb40 is same with the state(6) to be set
00:05:59.196 [2024-12-12 10:19:33.108295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2e78000c80 is same with the state(6) to be set
00:06:00.131 [2024-12-12 10:19:34.081199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a09b0 is same with the state(6) to be set
00:06:00.131 [2024-12-12 10:19:34.107661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219f960 is same with the state(6) to be set
00:06:00.131 [2024-12-12 10:19:34.107786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219f2c0 is same with the state(6) to be set
00:06:00.131 [2024-12-12 10:19:34.110309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2e7800d060 is same with the state(6) to be set
00:06:00.131 [2024-12-12 10:19:34.110816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2e7800d800 is same with the state(6) to be set
00:06:00.131 Initializing NVMe Controllers
00:06:00.131 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:00.131 Controller IO queue size 128, less than required.
00:06:00.131 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:00.131 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:00.131 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:00.131 Initialization complete. Launching workers.
00:06:00.131 ========================================================
00:06:00.131                                                                              Latency(us)
00:06:00.131 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:06:00.131 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     162.27       0.08  911290.54     753.42 1006425.20
00:06:00.131 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     161.78       0.08  927380.32     247.87 2000784.72
00:06:00.131 ========================================================
00:06:00.131 Total                                                                    :     324.05       0.16  919323.07     247.87 2000784.72
00:06:00.131 
00:06:00.131 [2024-12-12 10:19:34.111436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a09b0 (9): Bad file descriptor
00:06:00.131 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:06:00.131 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:00.131 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:06:00.131 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1355446
00:06:00.131 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:06:00.698 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:06:00.698 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1355446
00:06:00.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1355446) - No such process
00:06:00.698 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1355446
00:06:00.698 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:06:00.698 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1355446
00:06:00.698 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:06:00.698 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:00.698 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:06:00.698 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:00.698 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1355446
00:06:00.698 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:06:00.698 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:00.698 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:00.698 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:00.698 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:06:00.698 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:00.698 10:19:34
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:00.698 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.698 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:00.698 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.698 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:00.698 [2024-12-12 10:19:34.641094] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:00.698 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.698 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.698 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.698 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:00.698 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.698 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1356033 00:06:00.698 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:00.698 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:00.698 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1356033 00:06:00.698 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:00.957 [2024-12-12 10:19:34.730465] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
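
The second pass re-creates cnode1, reattaches the listener and the Delay0 namespace, and this time lets a 3-second perf run finish on its own, polling it with kill -0. A rough reconstruction of that loop, inferred from the delete_subsystem.sh@52-@60 line numbers in the trace above (not the verbatim script):

    "$SPDK_BIN/spdk_nvme_perf" -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!                          # 1356033 in this run
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 20 )) && break      # the real script fails the test on timeout (~10 s)
        sleep 0.5
    done
    wait "$perf_pid"                     # delete_subsystem.sh@67: collect perf's exit status

The "kill: (1356033) - No such process" line further below is the loop's exit condition firing once perf completes; unlike the first pass, an I/O error here would fail the test.
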
00:06:01.214 10:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:01.214 10:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1356033 00:06:01.214 10:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:01.780 10:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:01.780 10:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1356033 00:06:01.780 10:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:02.346 10:19:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:02.346 10:19:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1356033 00:06:02.346 10:19:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:02.912 10:19:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:02.912 10:19:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1356033 00:06:02.912 10:19:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:03.206 10:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:03.206 10:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1356033 00:06:03.206 10:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:03.861 10:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:03.861 10:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1356033 00:06:03.861 10:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:03.861 Initializing NVMe Controllers 00:06:03.861 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:03.861 Controller IO queue size 128, less than required. 00:06:03.861 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:03.861 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:03.861 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:03.861 Initialization complete. Launching workers. 
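
The results table below closes the loop on the workload math. The flags are quoted from the invocations above; the annotations are my reading of spdk_nvme_perf usage, not anything stated in the log:

    # -c 0xC                  core mask 0b1100: workers on cores 2 and 3
    #                         (matches the "with lcore 2/3" association lines)
    # -q 128                  128 outstanding I/Os per queue, which is why the
    #                         "Controller IO queue size 128, less than required" notice fires
    # -o 512 -w randrw -M 70  512 B random I/O, 70% reads
    # -t 3 / -t 5             run time in seconds (second and first pass respectively)
    #
    # Sanity check via Little's law, IOPS = queue depth / latency:
    #   128 / ~1.0025 s (Delay0's ~1 s plus transport time) = ~128 IOPS per core,
    #   matching the 128.00 IOPS per core reported below.
    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
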
00:06:03.861 ========================================================
00:06:03.861                                                                              Latency(us)
00:06:03.861 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:06:03.861 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1002514.10 1000177.92 1042266.81
00:06:03.861 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1004045.84 1000156.40 1042286.07
00:06:03.861 ========================================================
00:06:03.861 Total                                                                    :     256.00       0.12 1003279.97 1000156.40 1042286.07
00:06:03.861 
00:06:04.428 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:04.428 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1356033
00:06:04.428 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1356033) - No such process
00:06:04.428 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1356033
00:06:04.428 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:06:04.428 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:06:04.428 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:06:04.428 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:06:04.428 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:06:04.428 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:06:04.428 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:04.428 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:06:04.428 rmmod nvme_tcp
00:06:04.428 rmmod nvme_fabrics
00:06:04.428 rmmod nvme_keyring
00:06:04.428 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:06:04.428 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:06:04.428 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:06:04.428 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1355356 ']'
00:06:04.428 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1355356
00:06:04.428 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1355356 ']'
00:06:04.428 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1355356
00:06:04.428 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:06:04.428 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:04.428 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1355356
00:06:04.428 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:04.428 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '['
reactor_0 = sudo ']' 00:06:04.428 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1355356' 00:06:04.428 killing process with pid 1355356 00:06:04.428 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1355356 00:06:04.428 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1355356 00:06:04.687 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:04.687 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:04.687 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:04.687 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:04.687 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:04.687 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:04.687 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:04.687 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:04.687 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:04.687 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:04.687 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:04.687 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:06.591 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:06.591 00:06:06.591 real 0m16.047s 00:06:06.591 user 0m29.049s 00:06:06.591 sys 0m5.415s 00:06:06.591 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.591 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:06.591 ************************************ 00:06:06.591 END TEST nvmf_delete_subsystem 00:06:06.591 ************************************ 00:06:06.591 10:19:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:06.591 10:19:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:06.591 10:19:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.591 10:19:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:06.850 ************************************ 00:06:06.850 START TEST nvmf_host_management 00:06:06.850 ************************************ 00:06:06.850 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:06.850 * Looking for test storage... 
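
Each suite runs through the harness's run_test wrapper, which produces the START/END banners and the real/user/sys timing seen above. Roughly, as inferred from its output rather than copied from autotest_common.sh:

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                       # e.g. host_management.sh --transport=tcp
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }
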
00:06:06.850 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:06.850 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:06.850 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:06:06.850 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:06.850 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:06.850 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.850 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.850 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.850 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.850 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.850 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.850 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.850 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:06.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.851 --rc genhtml_branch_coverage=1 00:06:06.851 --rc genhtml_function_coverage=1 00:06:06.851 --rc genhtml_legend=1 00:06:06.851 --rc geninfo_all_blocks=1 00:06:06.851 --rc geninfo_unexecuted_blocks=1 00:06:06.851 00:06:06.851 ' 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:06.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.851 --rc genhtml_branch_coverage=1 00:06:06.851 --rc genhtml_function_coverage=1 00:06:06.851 --rc genhtml_legend=1 00:06:06.851 --rc geninfo_all_blocks=1 00:06:06.851 --rc geninfo_unexecuted_blocks=1 00:06:06.851 00:06:06.851 ' 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:06.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.851 --rc genhtml_branch_coverage=1 00:06:06.851 --rc genhtml_function_coverage=1 00:06:06.851 --rc genhtml_legend=1 00:06:06.851 --rc geninfo_all_blocks=1 00:06:06.851 --rc geninfo_unexecuted_blocks=1 00:06:06.851 00:06:06.851 ' 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:06.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.851 --rc genhtml_branch_coverage=1 00:06:06.851 --rc genhtml_function_coverage=1 00:06:06.851 --rc genhtml_legend=1 00:06:06.851 --rc geninfo_all_blocks=1 00:06:06.851 --rc geninfo_unexecuted_blocks=1 00:06:06.851 00:06:06.851 ' 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:06:06.851 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:06.851 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:06.852 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:06.852 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:06.852 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:06.852 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:13.417 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:13.417 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:13.417 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:13.417 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:13.417 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:13.417 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:13.417 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:13.417 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:13.417 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:13.417 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:13.417 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:06:13.417 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:13.417 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:13.417 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:13.417 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:13.417 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:13.417 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:13.417 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:13.417 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:13.417 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:13.417 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:13.417 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:13.417 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:13.417 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:13.417 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:13.417 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:13.417 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:13.417 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:13.417 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:13.417 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:13.417 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:13.417 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:13.417 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:13.417 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:13.417 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:13.417 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:13.417 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:13.417 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:13.418 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:13.418 Found net devices under 0000:af:00.0: cvl_0_0 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:13.418 10:19:46 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:13.418 Found net devices under 0000:af:00.1: cvl_0_1 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:13.418 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:13.418 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:06:13.418 00:06:13.418 --- 10.0.0.2 ping statistics --- 00:06:13.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:13.418 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:13.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:13.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:06:13.418 00:06:13.418 --- 10.0.0.1 ping statistics --- 00:06:13.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:13.418 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1360085 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1360085 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:13.418 10:19:46 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1360085 ']' 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.418 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:13.418 [2024-12-12 10:19:46.907814] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:06:13.418 [2024-12-12 10:19:46.907860] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:13.418 [2024-12-12 10:19:46.987417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:13.418 [2024-12-12 10:19:47.027131] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:13.418 [2024-12-12 10:19:47.027175] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:13.418 [2024-12-12 10:19:47.027184] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:13.418 [2024-12-12 10:19:47.027189] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:13.418 [2024-12-12 10:19:47.027194] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
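(For reference: the loopback topology that nvmf_tcp_init traced above, nvmf/common.sh@250 through @287, reduces to the commands below. This is a paraphrase of the trace using the script's own interface names and addresses, not verbatim source; the iptables rule is additionally tagged with an 'SPDK_NVMF:...' comment in the trace, presumably so later cleanup can find it.

    ip netns add cvl_0_0_ns_spdk                    # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # move one e810 port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port

Both directions are then verified with the single-packet pings seen above, and nvmf_tgt is launched inside the namespace via NVMF_TARGET_NS_CMD.)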
00:06:13.419 [2024-12-12 10:19:47.028660] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:13.419 [2024-12-12 10:19:47.028767] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:13.419 [2024-12-12 10:19:47.028849] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.419 [2024-12-12 10:19:47.028851] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:06:13.419 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.419 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:13.419 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:13.419 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:13.419 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:13.419 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:13.419 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:13.419 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.419 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:13.419 [2024-12-12 10:19:47.173191] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:13.419 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.419 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:13.419 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:13.419 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:13.419 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:13.419 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:13.419 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:13.419 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.419 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:13.419 Malloc0 00:06:13.419 [2024-12-12 10:19:47.250283] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:13.419 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.419 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:13.419 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:13.419 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:13.419 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=1360317 00:06:13.419 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1360317 /var/tmp/bdevperf.sock 00:06:13.419 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1360317 ']' 00:06:13.419 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:13.419 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:13.419 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:13.419 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.419 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:13.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:13.419 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:13.419 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.419 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:13.419 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:13.419 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:13.419 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:13.419 { 00:06:13.419 "params": { 00:06:13.419 "name": "Nvme$subsystem", 00:06:13.419 "trtype": "$TEST_TRANSPORT", 00:06:13.419 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:13.419 "adrfam": "ipv4", 00:06:13.419 "trsvcid": "$NVMF_PORT", 00:06:13.419 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:13.419 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:13.419 "hdgst": ${hdgst:-false}, 00:06:13.419 "ddgst": ${ddgst:-false} 00:06:13.419 }, 00:06:13.419 "method": "bdev_nvme_attach_controller" 00:06:13.419 } 00:06:13.419 EOF 00:06:13.419 )") 00:06:13.419 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:13.419 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:13.419 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:13.419 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:13.419 "params": { 00:06:13.419 "name": "Nvme0", 00:06:13.419 "trtype": "tcp", 00:06:13.419 "traddr": "10.0.0.2", 00:06:13.419 "adrfam": "ipv4", 00:06:13.419 "trsvcid": "4420", 00:06:13.419 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:13.419 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:13.419 "hdgst": false, 00:06:13.419 "ddgst": false 00:06:13.419 }, 00:06:13.419 "method": "bdev_nvme_attach_controller" 00:06:13.419 }' 00:06:13.419 [2024-12-12 10:19:47.343059] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
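(The perf job above is wired up with a process substitution: gen_nvmf_target_json prints the bdev_nvme_attach_controller parameters shown in the trace, and bdevperf reads them back as /dev/fd/63. A condensed sketch of the invocation, paraphrased from host_management.sh@72 with the long workspace path shortened:

    # Attach Nvme0 over NVMe/TCP (10.0.0.2:4420, cnode0/host0) and run a
    # 10 s verify workload, queue depth 64, 64 KiB I/O, on the resulting bdev.
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10
)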
00:06:13.419 [2024-12-12 10:19:47.343102] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1360317 ] 00:06:13.419 [2024-12-12 10:19:47.417237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.677 [2024-12-12 10:19:47.458493] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.936 Running I/O for 10 seconds... 00:06:13.936 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.936 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:13.936 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:13.936 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.936 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:13.936 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.936 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:13.936 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:13.936 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:13.936 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:13.936 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:13.936 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:13.936 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:13.936 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:13.936 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:13.936 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:13.936 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.936 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:13.936 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.936 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=85 00:06:13.936 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 85 -ge 100 ']' 00:06:13.936 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:06:14.197 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:06:14.197 
10:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:14.197 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:14.197 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:14.197 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.197 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:14.197 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.197 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:06:14.197 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:06:14.197 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:14.197 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:14.197 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:14.197 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:14.197 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.197 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:14.197 [2024-12-12 10:19:48.174928] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f304d0 is same with the state(6) to be set 00:06:14.197 [2024-12-12 10:19:48.175006] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f304d0 is same with the state(6) to be set 00:06:14.197 [2024-12-12 10:19:48.175014] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f304d0 is same with the state(6) to be set 00:06:14.197 [2024-12-12 10:19:48.175021] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f304d0 is same with the state(6) to be set 00:06:14.197 [2024-12-12 10:19:48.175027] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f304d0 is same with the state(6) to be set 00:06:14.197 [2024-12-12 10:19:48.175034] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f304d0 is same with the state(6) to be set 00:06:14.197 [2024-12-12 10:19:48.175044] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f304d0 is same with the state(6) to be set 00:06:14.197 [2024-12-12 10:19:48.175051] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f304d0 is same with the state(6) to be set 00:06:14.197 [2024-12-12 10:19:48.175057] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f304d0 is same with the state(6) to be set 00:06:14.197 [2024-12-12 10:19:48.175064] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f304d0 is same with the state(6) to be set 00:06:14.197 [2024-12-12 10:19:48.175070] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f304d0 is 
same with the state(6) to be set 00:06:14.197
[... tcp.c:1790:nvmf_tcp_qpair_set_recv_state: the identical recv-state message for tqpair=0x1f304d0 repeats ~50 more times between 10:19:48.175076 and 10:19:48.175368; elided ...] 00:06:14.198
[2024-12-12 10:19:48.175442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.198 [2024-12-12 10:19:48.175474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.198
[... READ commands cid:1 through cid:56 (lba 98432 through 105472, len:128 each), every one followed by the same ABORTED - SQ DELETION (00/08) completion; elided ...] 00:06:14.199 [2024-12-12 10:19:48.176333] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.199 [2024-12-12 10:19:48.176339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.199 [2024-12-12 10:19:48.176347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.199 [2024-12-12 10:19:48.176354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.199 [2024-12-12 10:19:48.176361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.199 [2024-12-12 10:19:48.176368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.199 [2024-12-12 10:19:48.176375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.199 [2024-12-12 10:19:48.176382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.199 [2024-12-12 10:19:48.176389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.199 [2024-12-12 10:19:48.176395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.199 [2024-12-12 10:19:48.176403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.199 [2024-12-12 10:19:48.176411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.199 [2024-12-12 10:19:48.176419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.199 [2024-12-12 10:19:48.176425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.199 [2024-12-12 10:19:48.176432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a00770 is same with the state(6) to be set 00:06:14.199 [2024-12-12 10:19:48.177385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:14.199 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.199 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:14.199 task offset: 98304 on job bdev=Nvme0n1 fails 00:06:14.199 00:06:14.199 Latency(us) 00:06:14.199 [2024-12-12T09:19:48.222Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:14.199 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:14.199 Job: Nvme0n1 ended in about 0.40 seconds with error 00:06:14.199 Verification LBA range: start 0x0 length 0x400 00:06:14.199 Nvme0n1 : 0.40 1925.81 120.36 160.48 0.00 29855.48 3542.06 26838.55 
00:06:14.199 [2024-12-12T09:19:48.222Z] =================================================================================================================== 00:06:14.199 [2024-12-12T09:19:48.222Z] Total : 1925.81 120.36 160.48 0.00 29855.48 3542.06 26838.55 00:06:14.199 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.199 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:14.199 [2024-12-12 10:19:48.179760] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:14.199 [2024-12-12 10:19:48.179781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e77e0 (9): Bad file descriptor 00:06:14.199 [2024-12-12 10:19:48.183137] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:06:14.199 [2024-12-12 10:19:48.183207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:06:14.199 [2024-12-12 10:19:48.183228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.199 [2024-12-12 10:19:48.183242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:06:14.199 [2024-12-12 10:19:48.183250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:06:14.199 [2024-12-12 10:19:48.183257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:06:14.199 [2024-12-12 10:19:48.183264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e77e0 00:06:14.199 [2024-12-12 10:19:48.183282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e77e0 (9): Bad file descriptor 00:06:14.199 [2024-12-12 10:19:48.183293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:06:14.199 [2024-12-12 10:19:48.183299] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:06:14.199 [2024-12-12 10:19:48.183308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:06:14.199 [2024-12-12 10:19:48.183315] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
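(The failed run above is the point of the test: host_management.sh@84 revoked host0's access while I/O was in flight, so the 64 outstanding reads were aborted with SQ DELETION, the reconnect was rejected with 'does not allow host', and bdevperf stopped with a non-zero status (spdk_app_stop'd on non-zero). The ACL toggle itself is just two RPCs against the target's /var/tmp/spdk.sock; a sketch using scripts/rpc.py, which the traced rpc_cmd wraps:

    # First bdevperf run fails once this lands:
    rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # Re-allow the host so the retry below can connect:
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
)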
00:06:14.199 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.199 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:15.229 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1360317 00:06:15.229 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1360317) - No such process 00:06:15.229 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:15.229 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:15.229 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:15.229 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:15.229 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:15.229 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:15.229 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:15.229 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:15.229 { 00:06:15.229 "params": { 00:06:15.229 "name": "Nvme$subsystem", 00:06:15.230 "trtype": "$TEST_TRANSPORT", 00:06:15.230 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:15.230 "adrfam": "ipv4", 00:06:15.230 "trsvcid": "$NVMF_PORT", 00:06:15.230 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:15.230 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:15.230 "hdgst": ${hdgst:-false}, 00:06:15.230 "ddgst": ${ddgst:-false} 00:06:15.230 }, 00:06:15.230 "method": "bdev_nvme_attach_controller" 00:06:15.230 } 00:06:15.230 EOF 00:06:15.230 )") 00:06:15.230 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:15.230 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:15.230 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:15.230 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:15.230 "params": { 00:06:15.230 "name": "Nvme0", 00:06:15.230 "trtype": "tcp", 00:06:15.230 "traddr": "10.0.0.2", 00:06:15.230 "adrfam": "ipv4", 00:06:15.230 "trsvcid": "4420", 00:06:15.230 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:15.230 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:15.230 "hdgst": false, 00:06:15.230 "ddgst": false 00:06:15.230 }, 00:06:15.230 "method": "bdev_nvme_attach_controller" 00:06:15.230 }' 00:06:15.230 [2024-12-12 10:19:49.244271] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
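(For completeness, the waitforio helper whose trace appears in the first run, host_management.sh@54 through @64, amounts to the polling loop below: up to 10 attempts, 0.25 s apart, succeeding once the bdev shows at least 100 completed reads. Paraphrased from the xtrace, not the verbatim script:

    i=10; ret=1
    while (( i != 0 )); do
        read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
            | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
        (( i-- ))
    done
    return $ret
)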
00:06:15.230 [2024-12-12 10:19:49.244317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1360567 ] 00:06:15.487 [2024-12-12 10:19:49.319065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.487 [2024-12-12 10:19:49.359441] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.745 Running I/O for 1 seconds... 00:06:17.120 1984.00 IOPS, 124.00 MiB/s 00:06:17.120 Latency(us) 00:06:17.120 [2024-12-12T09:19:51.143Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:17.120 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:17.120 Verification LBA range: start 0x0 length 0x400 00:06:17.120 Nvme0n1 : 1.01 2035.87 127.24 0.00 0.00 30915.11 4337.86 29459.99 00:06:17.120 [2024-12-12T09:19:51.143Z] =================================================================================================================== 00:06:17.120 [2024-12-12T09:19:51.143Z] Total : 2035.87 127.24 0.00 0.00 30915.11 4337.86 29459.99 00:06:17.120 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:17.120 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:17.120 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:17.120 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:17.120 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:17.120 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:17.120 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:17.120 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:17.120 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:17.120 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:17.120 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:17.120 rmmod nvme_tcp 00:06:17.120 rmmod nvme_fabrics 00:06:17.120 rmmod nvme_keyring 00:06:17.120 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:17.120 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:17.120 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:17.120 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1360085 ']' 00:06:17.120 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1360085 00:06:17.120 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1360085 ']' 00:06:17.120 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1360085 00:06:17.120 10:19:50 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:17.120 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:17.120 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1360085 00:06:17.120 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:17.120 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:17.120 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1360085' 00:06:17.120 killing process with pid 1360085 00:06:17.120 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1360085 00:06:17.120 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1360085 00:06:17.379 [2024-12-12 10:19:51.184040] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:17.379 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:17.379 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:17.379 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:17.379 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:17.379 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:17.379 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:17.379 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:17.379 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:17.379 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:17.379 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:17.379 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:17.379 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:19.280 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:19.280 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:19.280 00:06:19.280 real 0m12.664s 00:06:19.280 user 0m20.768s 00:06:19.280 sys 0m5.581s 00:06:19.280 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.280 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:19.280 ************************************ 00:06:19.281 END TEST nvmf_host_management 00:06:19.281 ************************************ 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 
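The nvmf_lvol test starting here layers an lvstore on a RAID0 of two 64 MiB malloc bdevs, exports a 20 MiB lvol over NVMe/TCP, then snapshots, resizes, clones and inflates it while spdk_nvme_perf writes to it. A condensed sketch of the RPC sequence the trace below walks through; every method name appears verbatim in the trace, and the UUID captures mirror what the harness does:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512                      # -> Malloc0
    $rpc bdev_malloc_create 64 512                      # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)      # lvstore UUID
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)     # 20 MiB volume
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT) # freeze current state
    $rpc bdev_lvol_resize "$lvol" 30                    # grow the live lvol to 30 MiB
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)      # thin clone of the snapshot
    $rpc bdev_lvol_inflate "$clone"                     # detach the clone from its snapshot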
00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:19.539 ************************************ 00:06:19.539 START TEST nvmf_lvol 00:06:19.539 ************************************ 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:19.539 * Looking for test storage... 00:06:19.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:19.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.539 --rc genhtml_branch_coverage=1 00:06:19.539 --rc genhtml_function_coverage=1 00:06:19.539 --rc genhtml_legend=1 00:06:19.539 --rc geninfo_all_blocks=1 00:06:19.539 --rc geninfo_unexecuted_blocks=1 00:06:19.539 00:06:19.539 ' 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:19.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.539 --rc genhtml_branch_coverage=1 00:06:19.539 --rc genhtml_function_coverage=1 00:06:19.539 --rc genhtml_legend=1 00:06:19.539 --rc geninfo_all_blocks=1 00:06:19.539 --rc geninfo_unexecuted_blocks=1 00:06:19.539 00:06:19.539 ' 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:19.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.539 --rc genhtml_branch_coverage=1 00:06:19.539 --rc genhtml_function_coverage=1 00:06:19.539 --rc genhtml_legend=1 00:06:19.539 --rc geninfo_all_blocks=1 00:06:19.539 --rc geninfo_unexecuted_blocks=1 00:06:19.539 00:06:19.539 ' 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:19.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.539 --rc genhtml_branch_coverage=1 00:06:19.539 --rc genhtml_function_coverage=1 00:06:19.539 --rc genhtml_legend=1 00:06:19.539 --rc geninfo_all_blocks=1 00:06:19.539 --rc geninfo_unexecuted_blocks=1 00:06:19.539 00:06:19.539 ' 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
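The cmp_versions trace above splits "1.15" and "2" on ".", "-" and ":" into arrays and compares them component by component to decide whether the installed lcov predates 2.x. A self-contained sketch of the same idea, not the harness's exact function:

    cmp_lt() {                       # succeeds when version $1 sorts before $3
        local IFS=.-: v max ver1 ver2
        read -ra ver1 <<< "$1"       # the same separators the trace sets
        read -ra ver2 <<< "$3"
        max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then return 0; fi
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then return 1; fi
        done
        return 1                     # equal versions are not "less than"
    }
    cmp_lt 1.15 '<' 2 && echo "lcov predates 2.x"    # the 1.15-vs-2 case logged above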
00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:19.539 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:19.540 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:19.540 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:19.540 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:19.540 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:19.540 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:19.540 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:19.540 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:19.540 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:19.540 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:19.540 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:19.540 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:19.540 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:19.540 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:19.540 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:19.540 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:19.540 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.540 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.540 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.540 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:19.540 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.540 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:19.540 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:19.540 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:19.540 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:19.540 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:19.540 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:19.540 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:19.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:19.540 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:19.540 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:19.540 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:19.540 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:19.540 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:19.540 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:06:19.540 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:19.540 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:19.540 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:19.540 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:19.540 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:19.540 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:19.540 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:19.540 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:19.540 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:19.540 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:19.798 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:19.798 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:19.798 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:19.798 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:19.798 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:26.362 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:26.362 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:26.362 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:26.362 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:26.362 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:26.362 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:26.362 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:26.362 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:26.362 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:26.362 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:26.362 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:26.362 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:26.362 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:26.362 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:06:26.362 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:26.362 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:26.362 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:26.362 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:26.362 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:26.362 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:26.362 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:26.362 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:26.362 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:26.362 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:26.362 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:26.362 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:26.362 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:26.362 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:26.362 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:26.362 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:26.362 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:26.362 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:26.362 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:26.362 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:26.362 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:26.362 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:26.362 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:26.362 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:26.362 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:26.362 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:26.362 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:26.362 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:26.362 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:26.362 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:26.362 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:26.362 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:26.362 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:26.362 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:26.363 10:19:59 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:26.363 Found net devices under 0000:af:00.0: cvl_0_0 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:26.363 Found net devices under 0000:af:00.1: cvl_0_1 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:26.363 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:26.363 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.418 ms 00:06:26.363 00:06:26.363 --- 10.0.0.2 ping statistics --- 00:06:26.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:26.363 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:26.363 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:26.363 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:06:26.363 00:06:26.363 --- 10.0.0.1 ping statistics --- 00:06:26.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:26.363 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1364487 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1364487 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1364487 ']' 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:26.363 [2024-12-12 10:19:59.612827] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
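The nvmf_tcp_init sequence above is what makes this a "phy" run: the first e810 port (cvl_0_0) is moved into its own network namespace so target traffic really crosses the link instead of loopback, and the ping pair verifies both directions before nvmf_tgt starts. The same plumbing, distilled from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port, tagged so iptr can strip it again at teardown.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator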
00:06:26.363 [2024-12-12 10:19:59.612869] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:26.363 [2024-12-12 10:19:59.687861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:26.363 [2024-12-12 10:19:59.729055] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:26.363 [2024-12-12 10:19:59.729092] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:26.363 [2024-12-12 10:19:59.729099] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:26.363 [2024-12-12 10:19:59.729105] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:26.363 [2024-12-12 10:19:59.729110] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:26.363 [2024-12-12 10:19:59.730327] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.363 [2024-12-12 10:19:59.730437] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.363 [2024-12-12 10:19:59.730438] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:26.363 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:26.363 [2024-12-12 10:20:00.032490] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:26.363 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:26.363 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:26.363 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:26.622 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:26.622 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:26.880 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:27.137 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=3e61f0b1-780d-46f4-9251-7d83101d10a5 00:06:27.138 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3e61f0b1-780d-46f4-9251-7d83101d10a5 lvol 20 00:06:27.138 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=98be03ab-84f7-4924-92f8-386f3d9d5bcf 00:06:27.138 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:27.395 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 98be03ab-84f7-4924-92f8-386f3d9d5bcf 00:06:27.653 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:27.910 [2024-12-12 10:20:01.680099] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:27.910 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:27.910 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1364769 00:06:27.910 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:27.910 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:29.284 10:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 98be03ab-84f7-4924-92f8-386f3d9d5bcf MY_SNAPSHOT 00:06:29.284 10:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=7fbb998f-621e-452c-9491-3625a806a813 00:06:29.284 10:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 98be03ab-84f7-4924-92f8-386f3d9d5bcf 30 00:06:29.542 10:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 7fbb998f-621e-452c-9491-3625a806a813 MY_CLONE 00:06:29.800 10:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=0e5da64c-99a8-4d74-8187-b609d13db9d8 00:06:29.800 10:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 0e5da64c-99a8-4d74-8187-b609d13db9d8 00:06:30.366 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1364769 00:06:38.476 Initializing NVMe Controllers 00:06:38.476 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:38.476 Controller IO queue size 128, less than required. 00:06:38.476 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:06:38.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:38.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:38.476 Initialization complete. Launching workers. 00:06:38.476 ======================================================== 00:06:38.476 Latency(us) 00:06:38.476 Device Information : IOPS MiB/s Average min max 00:06:38.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12212.30 47.70 10482.25 1547.75 52943.60 00:06:38.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12366.70 48.31 10350.54 3513.27 51676.84 00:06:38.476 ======================================================== 00:06:38.476 Total : 24579.00 96.01 10415.98 1547.75 52943.60 00:06:38.476 00:06:38.476 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:38.734 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 98be03ab-84f7-4924-92f8-386f3d9d5bcf 00:06:38.734 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3e61f0b1-780d-46f4-9251-7d83101d10a5 00:06:38.992 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:38.992 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:38.992 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:38.992 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:38.992 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:38.992 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:38.992 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:38.992 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:38.992 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:38.992 rmmod nvme_tcp 00:06:38.992 rmmod nvme_fabrics 00:06:38.992 rmmod nvme_keyring 00:06:39.250 10:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:39.250 10:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:39.250 10:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:39.250 10:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1364487 ']' 00:06:39.250 10:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1364487 00:06:39.250 10:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1364487 ']' 00:06:39.250 10:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1364487 00:06:39.250 10:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:06:39.250 10:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.250 10:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1364487 00:06:39.250 10:20:13 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:39.250 10:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:39.250 10:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1364487' 00:06:39.250 killing process with pid 1364487 00:06:39.250 10:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1364487 00:06:39.250 10:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1364487 00:06:39.508 10:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:39.508 10:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:39.508 10:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:39.508 10:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:39.508 10:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:06:39.508 10:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:39.508 10:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:06:39.508 10:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:39.508 10:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:39.508 10:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:39.508 10:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:39.508 10:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:41.412 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:41.412 00:06:41.412 real 0m22.001s 00:06:41.412 user 1m3.167s 00:06:41.412 sys 0m7.732s 00:06:41.412 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.412 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:41.412 ************************************ 00:06:41.412 END TEST nvmf_lvol 00:06:41.412 ************************************ 00:06:41.412 10:20:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:41.412 10:20:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:41.412 10:20:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.412 10:20:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:41.412 ************************************ 00:06:41.412 START TEST nvmf_lvs_grow 00:06:41.412 ************************************ 00:06:41.412 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:41.671 * Looking for test storage... 
00:06:41.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:41.671 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:41.671 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:06:41.671 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:41.671 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:41.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.672 --rc genhtml_branch_coverage=1 00:06:41.672 --rc genhtml_function_coverage=1 00:06:41.672 --rc genhtml_legend=1 00:06:41.672 --rc geninfo_all_blocks=1 00:06:41.672 --rc geninfo_unexecuted_blocks=1 00:06:41.672 00:06:41.672 ' 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:41.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.672 --rc genhtml_branch_coverage=1 00:06:41.672 --rc genhtml_function_coverage=1 00:06:41.672 --rc genhtml_legend=1 00:06:41.672 --rc geninfo_all_blocks=1 00:06:41.672 --rc geninfo_unexecuted_blocks=1 00:06:41.672 00:06:41.672 ' 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:41.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.672 --rc genhtml_branch_coverage=1 00:06:41.672 --rc genhtml_function_coverage=1 00:06:41.672 --rc genhtml_legend=1 00:06:41.672 --rc geninfo_all_blocks=1 00:06:41.672 --rc geninfo_unexecuted_blocks=1 00:06:41.672 00:06:41.672 ' 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:41.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.672 --rc genhtml_branch_coverage=1 00:06:41.672 --rc genhtml_function_coverage=1 00:06:41.672 --rc genhtml_legend=1 00:06:41.672 --rc geninfo_all_blocks=1 00:06:41.672 --rc geninfo_unexecuted_blocks=1 00:06:41.672 00:06:41.672 ' 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:41.672 10:20:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:41.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:41.672 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:41.673 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:41.673 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:41.673 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
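Note the failure captured above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' against an empty value, and test(1) rejects an empty string where an integer is expected ("[: : integer expression expected"). A defensive pattern that avoids this class of error (a generic sketch, not the upstream fix):

  flag=""                                # e.g. an unset SPDK_TEST_* toggle
  if [ "${flag:-0}" -eq 1 ]; then        # empty/unset defaults to 0, no error
      echo "feature enabled"
  fi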
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:41.673 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:41.673 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:41.673 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:41.673 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:41.673 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:41.673 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:41.673 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:41.673 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:41.673 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:41.673 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:41.673 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:41.673 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:06:41.673 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:48.238 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:48.238 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:48.238 10:20:21 
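The device scan traced here resolves each matched PCI function to its kernel net devices through sysfs. The lookup, condensed from the trace (PCI address and device names as found on this node):

  pci=0000:af:00.0
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)        # e.g. .../net/cvl_0_0
  pci_net_devs=("${pci_net_devs[@]##*/}")                 # keep just the basenames
  echo "Found net devices under $pci: ${pci_net_devs[*]}"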
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:48.238 Found net devices under 0000:af:00.0: cvl_0_0 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:48.238 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:48.239 Found net devices under 0000:af:00.1: cvl_0_1 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:48.239 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:48.239 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.313 ms 00:06:48.239 00:06:48.239 --- 10.0.0.2 ping statistics --- 00:06:48.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:48.239 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:48.239 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
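The TCP topology being validated by the pings here, condensed from the trace above: the target-side port moves into its own network namespace and gets 10.0.0.2, the initiator port stays in the root namespace with 10.0.0.1, and an iptables rule admits NVMe/TCP traffic on port 4420 (interface names as in this run):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                      # initiator -> target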
00:06:48.239 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:06:48.239 00:06:48.239 --- 10.0.0.1 ping statistics --- 00:06:48.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:48.239 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1370240 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1370240 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1370240 ']' 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:48.239 [2024-12-12 10:20:21.603181] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
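With both pings answering, the target application is launched inside that namespace and the harness waits on its RPC socket, as traced here (paths shortened from this run):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  # waitforlisten then blocks until /var/tmp/spdk.sock accepts RPC connections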
00:06:48.239 [2024-12-12 10:20:21.603222] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:48.239 [2024-12-12 10:20:21.677069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.239 [2024-12-12 10:20:21.715428] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:48.239 [2024-12-12 10:20:21.715464] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:48.239 [2024-12-12 10:20:21.715470] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:48.239 [2024-12-12 10:20:21.715476] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:48.239 [2024-12-12 10:20:21.715481] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:48.239 [2024-12-12 10:20:21.716001] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:48.239 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:48.239 [2024-12-12 10:20:22.023940] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:48.239 10:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:06:48.239 10:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.239 10:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.239 10:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:48.239 ************************************ 00:06:48.239 START TEST lvs_grow_clean 00:06:48.239 ************************************ 00:06:48.239 10:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:06:48.239 10:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:06:48.239 10:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:48.239 10:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:48.239 10:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:48.239 10:20:22 
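The lvs_grow_clean setup, condensed from the trace around this point: the TCP transport is created, then a 200M sparse file backs an AIO bdev with a 4K block size, which in turn hosts an lvstore with 4 MiB clusters and a 150M lvol (UUIDs as created in this run):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  truncate -s 200M test/nvmf/target/aio_bdev
  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  lvs=$(scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)    # prints c241d6e3-...
  scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150      # prints the lvol UUID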
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:48.239 10:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:48.239 10:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:48.239 10:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:48.239 10:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:48.498 10:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:48.498 10:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:48.498 10:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=c241d6e3-6169-4570-8cd3-e2f44e152225 00:06:48.498 10:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c241d6e3-6169-4570-8cd3-e2f44e152225 00:06:48.498 10:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:48.756 10:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:48.756 10:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:48.756 10:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c241d6e3-6169-4570-8cd3-e2f44e152225 lvol 150 00:06:49.015 10:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=02749d10-97ab-475f-a0fc-844997c478d8 00:06:49.015 10:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:49.015 10:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:49.273 [2024-12-12 10:20:23.040928] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:49.273 [2024-12-12 10:20:23.040978] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:49.273 true 00:06:49.273 10:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
c241d6e3-6169-4570-8cd3-e2f44e152225 00:06:49.273 10:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:49.273 10:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:49.273 10:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:49.532 10:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 02749d10-97ab-475f-a0fc-844997c478d8 00:06:49.790 10:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:49.791 [2024-12-12 10:20:23.783127] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:49.791 10:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:50.049 10:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1370676 00:06:50.049 10:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:50.049 10:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:50.049 10:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1370676 /var/tmp/bdevperf.sock 00:06:50.049 10:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1370676 ']' 00:06:50.049 10:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:50.049 10:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.049 10:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:50.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:50.049 10:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.049 10:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:50.049 [2024-12-12 10:20:24.032820] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
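The export path just traced, condensed: an allow-any-host subsystem with serial SPDK0, the lvol attached as a namespace, and a TCP listener on the namespaced target address:

  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 \
      02749d10-97ab-475f-a0fc-844997c478d8                # becomes nsid 1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420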
00:06:50.049 [2024-12-12 10:20:24.032868] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1370676 ] 00:06:50.308 [2024-12-12 10:20:24.105625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.308 [2024-12-12 10:20:24.146092] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.308 10:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.308 10:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:06:50.308 10:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:50.875 Nvme0n1 00:06:50.875 10:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:50.875 [ 00:06:50.875 { 00:06:50.875 "name": "Nvme0n1", 00:06:50.875 "aliases": [ 00:06:50.875 "02749d10-97ab-475f-a0fc-844997c478d8" 00:06:50.875 ], 00:06:50.875 "product_name": "NVMe disk", 00:06:50.875 "block_size": 4096, 00:06:50.875 "num_blocks": 38912, 00:06:50.875 "uuid": "02749d10-97ab-475f-a0fc-844997c478d8", 00:06:50.875 "numa_id": 1, 00:06:50.875 "assigned_rate_limits": { 00:06:50.875 "rw_ios_per_sec": 0, 00:06:50.875 "rw_mbytes_per_sec": 0, 00:06:50.875 "r_mbytes_per_sec": 0, 00:06:50.875 "w_mbytes_per_sec": 0 00:06:50.875 }, 00:06:50.875 "claimed": false, 00:06:50.875 "zoned": false, 00:06:50.875 "supported_io_types": { 00:06:50.876 "read": true, 00:06:50.876 "write": true, 00:06:50.876 "unmap": true, 00:06:50.876 "flush": true, 00:06:50.876 "reset": true, 00:06:50.876 "nvme_admin": true, 00:06:50.876 "nvme_io": true, 00:06:50.876 "nvme_io_md": false, 00:06:50.876 "write_zeroes": true, 00:06:50.876 "zcopy": false, 00:06:50.876 "get_zone_info": false, 00:06:50.876 "zone_management": false, 00:06:50.876 "zone_append": false, 00:06:50.876 "compare": true, 00:06:50.876 "compare_and_write": true, 00:06:50.876 "abort": true, 00:06:50.876 "seek_hole": false, 00:06:50.876 "seek_data": false, 00:06:50.876 "copy": true, 00:06:50.876 "nvme_iov_md": false 00:06:50.876 }, 00:06:50.876 "memory_domains": [ 00:06:50.876 { 00:06:50.876 "dma_device_id": "system", 00:06:50.876 "dma_device_type": 1 00:06:50.876 } 00:06:50.876 ], 00:06:50.876 "driver_specific": { 00:06:50.876 "nvme": [ 00:06:50.876 { 00:06:50.876 "trid": { 00:06:50.876 "trtype": "TCP", 00:06:50.876 "adrfam": "IPv4", 00:06:50.876 "traddr": "10.0.0.2", 00:06:50.876 "trsvcid": "4420", 00:06:50.876 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:06:50.876 }, 00:06:50.876 "ctrlr_data": { 00:06:50.876 "cntlid": 1, 00:06:50.876 "vendor_id": "0x8086", 00:06:50.876 "model_number": "SPDK bdev Controller", 00:06:50.876 "serial_number": "SPDK0", 00:06:50.876 "firmware_revision": "25.01", 00:06:50.876 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:50.876 "oacs": { 00:06:50.876 "security": 0, 00:06:50.876 "format": 0, 00:06:50.876 "firmware": 0, 00:06:50.876 "ns_manage": 0 00:06:50.876 }, 00:06:50.876 "multi_ctrlr": true, 00:06:50.876 
"ana_reporting": false 00:06:50.876 }, 00:06:50.876 "vs": { 00:06:50.876 "nvme_version": "1.3" 00:06:50.876 }, 00:06:50.876 "ns_data": { 00:06:50.876 "id": 1, 00:06:50.876 "can_share": true 00:06:50.876 } 00:06:50.876 } 00:06:50.876 ], 00:06:50.876 "mp_policy": "active_passive" 00:06:50.876 } 00:06:50.876 } 00:06:50.876 ] 00:06:51.134 10:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1370743 00:06:51.134 10:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:51.134 10:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:51.134 Running I/O for 10 seconds... 00:06:52.069 Latency(us) 00:06:52.069 [2024-12-12T09:20:26.092Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:52.070 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:52.070 Nvme0n1 : 1.00 23602.00 92.20 0.00 0.00 0.00 0.00 0.00 00:06:52.070 [2024-12-12T09:20:26.093Z] =================================================================================================================== 00:06:52.070 [2024-12-12T09:20:26.093Z] Total : 23602.00 92.20 0.00 0.00 0.00 0.00 0.00 00:06:52.070 00:06:53.006 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c241d6e3-6169-4570-8cd3-e2f44e152225 00:06:53.006 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:53.006 Nvme0n1 : 2.00 23717.50 92.65 0.00 0.00 0.00 0.00 0.00 00:06:53.006 [2024-12-12T09:20:27.029Z] =================================================================================================================== 00:06:53.006 [2024-12-12T09:20:27.029Z] Total : 23717.50 92.65 0.00 0.00 0.00 0.00 0.00 00:06:53.006 00:06:53.264 true 00:06:53.264 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c241d6e3-6169-4570-8cd3-e2f44e152225 00:06:53.264 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:06:53.523 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:06:53.523 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:06:53.523 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1370743 00:06:54.090 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:54.090 Nvme0n1 : 3.00 23799.33 92.97 0.00 0.00 0.00 0.00 0.00 00:06:54.090 [2024-12-12T09:20:28.113Z] =================================================================================================================== 00:06:54.090 [2024-12-12T09:20:28.113Z] Total : 23799.33 92.97 0.00 0.00 0.00 0.00 0.00 00:06:54.090 00:06:55.024 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:55.024 Nvme0n1 : 4.00 23841.50 93.13 0.00 0.00 0.00 0.00 0.00 00:06:55.024 [2024-12-12T09:20:29.047Z] 
=================================================================================================================== 00:06:55.024 [2024-12-12T09:20:29.047Z] Total : 23841.50 93.13 0.00 0.00 0.00 0.00 0.00 00:06:55.024 00:06:56.400 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:56.400 Nvme0n1 : 5.00 23887.80 93.31 0.00 0.00 0.00 0.00 0.00 00:06:56.400 [2024-12-12T09:20:30.423Z] =================================================================================================================== 00:06:56.400 [2024-12-12T09:20:30.423Z] Total : 23887.80 93.31 0.00 0.00 0.00 0.00 0.00 00:06:56.400 00:06:57.335 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:57.335 Nvme0n1 : 6.00 23835.17 93.11 0.00 0.00 0.00 0.00 0.00 00:06:57.335 [2024-12-12T09:20:31.358Z] =================================================================================================================== 00:06:57.335 [2024-12-12T09:20:31.358Z] Total : 23835.17 93.11 0.00 0.00 0.00 0.00 0.00 00:06:57.335 00:06:58.271 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:58.271 Nvme0n1 : 7.00 23878.00 93.27 0.00 0.00 0.00 0.00 0.00 00:06:58.271 [2024-12-12T09:20:32.294Z] =================================================================================================================== 00:06:58.271 [2024-12-12T09:20:32.294Z] Total : 23878.00 93.27 0.00 0.00 0.00 0.00 0.00 00:06:58.271 00:06:59.206 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:59.206 Nvme0n1 : 8.00 23919.62 93.44 0.00 0.00 0.00 0.00 0.00 00:06:59.206 [2024-12-12T09:20:33.229Z] =================================================================================================================== 00:06:59.206 [2024-12-12T09:20:33.229Z] Total : 23919.62 93.44 0.00 0.00 0.00 0.00 0.00 00:06:59.206 00:07:00.142 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:00.142 Nvme0n1 : 9.00 23944.11 93.53 0.00 0.00 0.00 0.00 0.00 00:07:00.142 [2024-12-12T09:20:34.165Z] =================================================================================================================== 00:07:00.142 [2024-12-12T09:20:34.165Z] Total : 23944.11 93.53 0.00 0.00 0.00 0.00 0.00 00:07:00.142 00:07:01.078 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:01.078 Nvme0n1 : 10.00 23961.80 93.60 0.00 0.00 0.00 0.00 0.00 00:07:01.078 [2024-12-12T09:20:35.101Z] =================================================================================================================== 00:07:01.078 [2024-12-12T09:20:35.101Z] Total : 23961.80 93.60 0.00 0.00 0.00 0.00 0.00 00:07:01.078 00:07:01.078 00:07:01.078 Latency(us) 00:07:01.078 [2024-12-12T09:20:35.101Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:01.078 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:01.078 Nvme0n1 : 10.00 23966.29 93.62 0.00 0.00 5337.77 3105.16 10610.59 00:07:01.078 [2024-12-12T09:20:35.101Z] =================================================================================================================== 00:07:01.078 [2024-12-12T09:20:35.101Z] Total : 23966.29 93.62 0.00 0.00 5337.77 3105.16 10610.59 00:07:01.078 { 00:07:01.078 "results": [ 00:07:01.078 { 00:07:01.078 "job": "Nvme0n1", 00:07:01.078 "core_mask": "0x2", 00:07:01.078 "workload": "randwrite", 00:07:01.078 "status": "finished", 00:07:01.078 "queue_depth": 128, 00:07:01.078 "io_size": 4096, 00:07:01.078 
"runtime": 10.003468, 00:07:01.078 "iops": 23966.28849115127, 00:07:01.078 "mibps": 93.61831441855965, 00:07:01.078 "io_failed": 0, 00:07:01.078 "io_timeout": 0, 00:07:01.078 "avg_latency_us": 5337.766575276295, 00:07:01.078 "min_latency_us": 3105.158095238095, 00:07:01.078 "max_latency_us": 10610.590476190477 00:07:01.078 } 00:07:01.078 ], 00:07:01.078 "core_count": 1 00:07:01.078 } 00:07:01.078 10:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1370676 00:07:01.078 10:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1370676 ']' 00:07:01.078 10:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1370676 00:07:01.078 10:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:01.078 10:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:01.078 10:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1370676 00:07:01.078 10:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:01.078 10:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:01.078 10:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1370676' 00:07:01.078 killing process with pid 1370676 00:07:01.078 10:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1370676 00:07:01.078 Received shutdown signal, test time was about 10.000000 seconds 00:07:01.078 00:07:01.078 Latency(us) 00:07:01.078 [2024-12-12T09:20:35.101Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:01.078 [2024-12-12T09:20:35.101Z] =================================================================================================================== 00:07:01.078 [2024-12-12T09:20:35.101Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:01.078 10:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1370676 00:07:01.336 10:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:01.595 10:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:01.853 10:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c241d6e3-6169-4570-8cd3-e2f44e152225 00:07:01.853 10:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:01.853 10:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:01.853 10:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:01.853 10:20:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:02.112 [2024-12-12 10:20:36.008220] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:02.112 10:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c241d6e3-6169-4570-8cd3-e2f44e152225 00:07:02.112 10:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:02.112 10:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c241d6e3-6169-4570-8cd3-e2f44e152225 00:07:02.112 10:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:02.112 10:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:02.112 10:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:02.112 10:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:02.112 10:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:02.112 10:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:02.112 10:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:02.112 10:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:02.112 10:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c241d6e3-6169-4570-8cd3-e2f44e152225 00:07:02.371 request: 00:07:02.371 { 00:07:02.371 "uuid": "c241d6e3-6169-4570-8cd3-e2f44e152225", 00:07:02.371 "method": "bdev_lvol_get_lvstores", 00:07:02.371 "req_id": 1 00:07:02.371 } 00:07:02.371 Got JSON-RPC error response 00:07:02.371 response: 00:07:02.371 { 00:07:02.371 "code": -19, 00:07:02.371 "message": "No such device" 00:07:02.371 } 00:07:02.371 10:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:02.371 10:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:02.371 10:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:02.371 10:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:02.371 10:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:02.630 aio_bdev 00:07:02.630 10:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 02749d10-97ab-475f-a0fc-844997c478d8 00:07:02.630 10:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=02749d10-97ab-475f-a0fc-844997c478d8 00:07:02.630 10:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:02.630 10:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:02.630 10:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:02.630 10:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:02.630 10:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:02.888 10:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 02749d10-97ab-475f-a0fc-844997c478d8 -t 2000 00:07:02.888 [ 00:07:02.888 { 00:07:02.888 "name": "02749d10-97ab-475f-a0fc-844997c478d8", 00:07:02.888 "aliases": [ 00:07:02.888 "lvs/lvol" 00:07:02.888 ], 00:07:02.888 "product_name": "Logical Volume", 00:07:02.888 "block_size": 4096, 00:07:02.888 "num_blocks": 38912, 00:07:02.888 "uuid": "02749d10-97ab-475f-a0fc-844997c478d8", 00:07:02.888 "assigned_rate_limits": { 00:07:02.888 "rw_ios_per_sec": 0, 00:07:02.888 "rw_mbytes_per_sec": 0, 00:07:02.888 "r_mbytes_per_sec": 0, 00:07:02.888 "w_mbytes_per_sec": 0 00:07:02.888 }, 00:07:02.888 "claimed": false, 00:07:02.888 "zoned": false, 00:07:02.888 "supported_io_types": { 00:07:02.888 "read": true, 00:07:02.888 "write": true, 00:07:02.888 "unmap": true, 00:07:02.888 "flush": false, 00:07:02.888 "reset": true, 00:07:02.888 "nvme_admin": false, 00:07:02.888 "nvme_io": false, 00:07:02.888 "nvme_io_md": false, 00:07:02.888 "write_zeroes": true, 00:07:02.888 "zcopy": false, 00:07:02.888 "get_zone_info": false, 00:07:02.888 "zone_management": false, 00:07:02.888 "zone_append": false, 00:07:02.888 "compare": false, 00:07:02.888 "compare_and_write": false, 00:07:02.888 "abort": false, 00:07:02.888 "seek_hole": true, 00:07:02.888 "seek_data": true, 00:07:02.888 "copy": false, 00:07:02.888 "nvme_iov_md": false 00:07:02.888 }, 00:07:02.888 "driver_specific": { 00:07:02.888 "lvol": { 00:07:02.888 "lvol_store_uuid": "c241d6e3-6169-4570-8cd3-e2f44e152225", 00:07:02.888 "base_bdev": "aio_bdev", 00:07:02.888 "thin_provision": false, 00:07:02.888 "num_allocated_clusters": 38, 00:07:02.888 "snapshot": false, 00:07:02.888 "clone": false, 00:07:02.888 "esnap_clone": false 00:07:02.888 } 00:07:02.888 } 00:07:02.888 } 00:07:02.888 ] 00:07:02.888 10:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:02.888 10:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c241d6e3-6169-4570-8cd3-e2f44e152225 00:07:02.888 
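The grow this test exists to verify, reconstructed from this run's earlier steps: enlarge the backing file, rescan the AIO bdev so its block count doubles, grow the lvstore, and confirm the cluster count via the RPC output:

  truncate -s 400M test/nvmf/target/aio_bdev
  scripts/rpc.py bdev_aio_rescan aio_bdev                 # 51200 -> 102400 blocks
  scripts/rpc.py bdev_lvol_grow_lvstore -u c241d6e3-6169-4570-8cd3-e2f44e152225
  scripts/rpc.py bdev_lvol_get_lvstores -u c241d6e3-6169-4570-8cd3-e2f44e152225 \
      | jq -r '.[0].total_data_clusters'                  # 49 -> 99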
10:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:03.146 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:03.146 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c241d6e3-6169-4570-8cd3-e2f44e152225 00:07:03.146 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:03.405 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:03.405 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 02749d10-97ab-475f-a0fc-844997c478d8 00:07:03.664 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c241d6e3-6169-4570-8cd3-e2f44e152225 00:07:03.664 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:03.922 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:03.922 00:07:03.922 real 0m15.781s 00:07:03.922 user 0m15.288s 00:07:03.922 sys 0m1.531s 00:07:03.922 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.922 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:03.922 ************************************ 00:07:03.922 END TEST lvs_grow_clean 00:07:03.922 ************************************ 00:07:03.922 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:03.922 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:03.922 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.922 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:03.922 ************************************ 00:07:03.922 START TEST lvs_grow_dirty 00:07:03.922 ************************************ 00:07:03.922 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:03.922 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:03.922 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:03.922 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:03.922 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:03.922 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:03.922 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:03.922 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:03.922 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:04.181 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:04.181 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:04.181 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:04.439 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=b5d828d0-9c6e-4525-a243-a7fc4d9eb726 00:07:04.439 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5d828d0-9c6e-4525-a243-a7fc4d9eb726 00:07:04.439 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:04.697 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:04.697 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:04.697 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b5d828d0-9c6e-4525-a243-a7fc4d9eb726 lvol 150 00:07:04.697 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=5ac58159-f742-45b4-a301-b2fafcbe5da5 00:07:04.697 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:04.697 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:04.956 [2024-12-12 10:20:38.888594] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:04.956 [2024-12-12 10:20:38.888649] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:04.956 true 00:07:04.956 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5d828d0-9c6e-4525-a243-a7fc4d9eb726 00:07:04.956 10:20:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:05.214 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:05.214 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:05.472 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5ac58159-f742-45b4-a301-b2fafcbe5da5 00:07:05.472 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:05.731 [2024-12-12 10:20:39.646803] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:05.731 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:05.989 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:05.990 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1373265 00:07:05.990 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:05.990 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1373265 /var/tmp/bdevperf.sock 00:07:05.990 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1373265 ']' 00:07:05.990 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:05.990 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:05.990 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:05.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:05.990 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:05.990 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:05.990 [2024-12-12 10:20:39.878233] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
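For reference, the bdevperf flow being traced from here on can be reproduced by hand. A minimal sketch, assuming the same workspace layout and an NVMe/TCP target already listening on 10.0.0.2:4420; every command appears verbatim in the trace, and SPDK= is only shorthand for the checkout path:
# start bdevperf in wait-for-RPC mode (-z): it idles on its socket until told what to test
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
# attach the exported namespace as bdev Nvme0n1 over that RPC socket
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
# run the configured randwrite workload and collect the per-second latency summary
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests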
00:07:05.990 [2024-12-12 10:20:39.878280] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1373265 ] 00:07:05.990 [2024-12-12 10:20:39.949214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.990 [2024-12-12 10:20:39.987732] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.248 10:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.248 10:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:06.248 10:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:06.506 Nvme0n1 00:07:06.506 10:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:06.764 [ 00:07:06.764 { 00:07:06.764 "name": "Nvme0n1", 00:07:06.764 "aliases": [ 00:07:06.764 "5ac58159-f742-45b4-a301-b2fafcbe5da5" 00:07:06.764 ], 00:07:06.764 "product_name": "NVMe disk", 00:07:06.764 "block_size": 4096, 00:07:06.764 "num_blocks": 38912, 00:07:06.764 "uuid": "5ac58159-f742-45b4-a301-b2fafcbe5da5", 00:07:06.764 "numa_id": 1, 00:07:06.764 "assigned_rate_limits": { 00:07:06.764 "rw_ios_per_sec": 0, 00:07:06.764 "rw_mbytes_per_sec": 0, 00:07:06.764 "r_mbytes_per_sec": 0, 00:07:06.764 "w_mbytes_per_sec": 0 00:07:06.764 }, 00:07:06.764 "claimed": false, 00:07:06.764 "zoned": false, 00:07:06.764 "supported_io_types": { 00:07:06.764 "read": true, 00:07:06.764 "write": true, 00:07:06.764 "unmap": true, 00:07:06.764 "flush": true, 00:07:06.764 "reset": true, 00:07:06.764 "nvme_admin": true, 00:07:06.764 "nvme_io": true, 00:07:06.764 "nvme_io_md": false, 00:07:06.764 "write_zeroes": true, 00:07:06.764 "zcopy": false, 00:07:06.764 "get_zone_info": false, 00:07:06.764 "zone_management": false, 00:07:06.764 "zone_append": false, 00:07:06.764 "compare": true, 00:07:06.764 "compare_and_write": true, 00:07:06.764 "abort": true, 00:07:06.764 "seek_hole": false, 00:07:06.764 "seek_data": false, 00:07:06.764 "copy": true, 00:07:06.764 "nvme_iov_md": false 00:07:06.764 }, 00:07:06.764 "memory_domains": [ 00:07:06.764 { 00:07:06.764 "dma_device_id": "system", 00:07:06.764 "dma_device_type": 1 00:07:06.764 } 00:07:06.764 ], 00:07:06.764 "driver_specific": { 00:07:06.764 "nvme": [ 00:07:06.764 { 00:07:06.764 "trid": { 00:07:06.764 "trtype": "TCP", 00:07:06.764 "adrfam": "IPv4", 00:07:06.764 "traddr": "10.0.0.2", 00:07:06.764 "trsvcid": "4420", 00:07:06.764 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:06.765 }, 00:07:06.765 "ctrlr_data": { 00:07:06.765 "cntlid": 1, 00:07:06.765 "vendor_id": "0x8086", 00:07:06.765 "model_number": "SPDK bdev Controller", 00:07:06.765 "serial_number": "SPDK0", 00:07:06.765 "firmware_revision": "25.01", 00:07:06.765 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:06.765 "oacs": { 00:07:06.765 "security": 0, 00:07:06.765 "format": 0, 00:07:06.765 "firmware": 0, 00:07:06.765 "ns_manage": 0 00:07:06.765 }, 00:07:06.765 "multi_ctrlr": true, 00:07:06.765 
"ana_reporting": false 00:07:06.765 }, 00:07:06.765 "vs": { 00:07:06.765 "nvme_version": "1.3" 00:07:06.765 }, 00:07:06.765 "ns_data": { 00:07:06.765 "id": 1, 00:07:06.765 "can_share": true 00:07:06.765 } 00:07:06.765 } 00:07:06.765 ], 00:07:06.765 "mp_policy": "active_passive" 00:07:06.765 } 00:07:06.765 } 00:07:06.765 ] 00:07:06.765 10:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1373491 00:07:06.765 10:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:06.765 10:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:06.765 Running I/O for 10 seconds... 00:07:07.700 Latency(us) 00:07:07.700 [2024-12-12T09:20:41.723Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:07.700 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:07.700 Nvme0n1 : 1.00 23643.00 92.36 0.00 0.00 0.00 0.00 0.00 00:07:07.700 [2024-12-12T09:20:41.723Z] =================================================================================================================== 00:07:07.700 [2024-12-12T09:20:41.723Z] Total : 23643.00 92.36 0.00 0.00 0.00 0.00 0.00 00:07:07.700 00:07:08.635 10:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b5d828d0-9c6e-4525-a243-a7fc4d9eb726 00:07:08.894 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:08.894 Nvme0n1 : 2.00 23788.50 92.92 0.00 0.00 0.00 0.00 0.00 00:07:08.894 [2024-12-12T09:20:42.917Z] =================================================================================================================== 00:07:08.894 [2024-12-12T09:20:42.917Z] Total : 23788.50 92.92 0.00 0.00 0.00 0.00 0.00 00:07:08.894 00:07:08.894 true 00:07:08.894 10:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5d828d0-9c6e-4525-a243-a7fc4d9eb726 00:07:08.894 10:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:09.152 10:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:09.152 10:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:09.152 10:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1373491 00:07:09.719 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:09.719 Nvme0n1 : 3.00 23808.67 93.00 0.00 0.00 0.00 0.00 0.00 00:07:09.719 [2024-12-12T09:20:43.742Z] =================================================================================================================== 00:07:09.719 [2024-12-12T09:20:43.742Z] Total : 23808.67 93.00 0.00 0.00 0.00 0.00 0.00 00:07:09.719 00:07:11.094 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:11.094 Nvme0n1 : 4.00 23881.50 93.29 0.00 0.00 0.00 0.00 0.00 00:07:11.094 [2024-12-12T09:20:45.117Z] 
=================================================================================================================== 00:07:11.094 [2024-12-12T09:20:45.117Z] Total : 23881.50 93.29 0.00 0.00 0.00 0.00 0.00 00:07:11.094 00:07:11.660 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:11.660 Nvme0n1 : 5.00 23924.80 93.46 0.00 0.00 0.00 0.00 0.00 00:07:11.660 [2024-12-12T09:20:45.683Z] =================================================================================================================== 00:07:11.660 [2024-12-12T09:20:45.683Z] Total : 23924.80 93.46 0.00 0.00 0.00 0.00 0.00 00:07:11.660 00:07:13.034 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:13.034 Nvme0n1 : 6.00 23950.33 93.56 0.00 0.00 0.00 0.00 0.00 00:07:13.034 [2024-12-12T09:20:47.057Z] =================================================================================================================== 00:07:13.034 [2024-12-12T09:20:47.057Z] Total : 23950.33 93.56 0.00 0.00 0.00 0.00 0.00 00:07:13.034 00:07:13.970 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:13.970 Nvme0n1 : 7.00 23965.57 93.62 0.00 0.00 0.00 0.00 0.00 00:07:13.970 [2024-12-12T09:20:47.993Z] =================================================================================================================== 00:07:13.970 [2024-12-12T09:20:47.993Z] Total : 23965.57 93.62 0.00 0.00 0.00 0.00 0.00 00:07:13.970 00:07:14.906 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:14.906 Nvme0n1 : 8.00 23981.50 93.68 0.00 0.00 0.00 0.00 0.00 00:07:14.906 [2024-12-12T09:20:48.929Z] =================================================================================================================== 00:07:14.906 [2024-12-12T09:20:48.929Z] Total : 23981.50 93.68 0.00 0.00 0.00 0.00 0.00 00:07:14.906 00:07:15.842 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.842 Nvme0n1 : 9.00 23997.44 93.74 0.00 0.00 0.00 0.00 0.00 00:07:15.842 [2024-12-12T09:20:49.865Z] =================================================================================================================== 00:07:15.842 [2024-12-12T09:20:49.865Z] Total : 23997.44 93.74 0.00 0.00 0.00 0.00 0.00 00:07:15.842 00:07:16.776 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:16.776 Nvme0n1 : 10.00 23973.80 93.65 0.00 0.00 0.00 0.00 0.00 00:07:16.776 [2024-12-12T09:20:50.799Z] =================================================================================================================== 00:07:16.776 [2024-12-12T09:20:50.799Z] Total : 23973.80 93.65 0.00 0.00 0.00 0.00 0.00 00:07:16.776 00:07:16.776 00:07:16.776 Latency(us) 00:07:16.776 [2024-12-12T09:20:50.799Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:16.776 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:16.776 Nvme0n1 : 10.00 23971.91 93.64 0.00 0.00 5336.29 3183.18 14043.43 00:07:16.776 [2024-12-12T09:20:50.799Z] =================================================================================================================== 00:07:16.776 [2024-12-12T09:20:50.799Z] Total : 23971.91 93.64 0.00 0.00 5336.29 3183.18 14043.43 00:07:16.776 { 00:07:16.776 "results": [ 00:07:16.776 { 00:07:16.776 "job": "Nvme0n1", 00:07:16.776 "core_mask": "0x2", 00:07:16.776 "workload": "randwrite", 00:07:16.776 "status": "finished", 00:07:16.776 "queue_depth": 128, 00:07:16.776 "io_size": 4096, 00:07:16.776 
"runtime": 10.003459, 00:07:16.776 "iops": 23971.908116982337, 00:07:16.776 "mibps": 93.64026608196225, 00:07:16.776 "io_failed": 0, 00:07:16.776 "io_timeout": 0, 00:07:16.776 "avg_latency_us": 5336.291055740033, 00:07:16.776 "min_latency_us": 3183.177142857143, 00:07:16.776 "max_latency_us": 14043.42857142857 00:07:16.776 } 00:07:16.776 ], 00:07:16.776 "core_count": 1 00:07:16.776 } 00:07:16.776 10:20:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1373265 00:07:16.776 10:20:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1373265 ']' 00:07:16.776 10:20:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1373265 00:07:16.776 10:20:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:16.776 10:20:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.776 10:20:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1373265 00:07:16.776 10:20:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:16.776 10:20:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:16.776 10:20:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1373265' 00:07:16.776 killing process with pid 1373265 00:07:16.776 10:20:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1373265 00:07:16.776 Received shutdown signal, test time was about 10.000000 seconds 00:07:16.776 00:07:16.776 Latency(us) 00:07:16.776 [2024-12-12T09:20:50.799Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:16.776 [2024-12-12T09:20:50.799Z] =================================================================================================================== 00:07:16.776 [2024-12-12T09:20:50.799Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:16.776 10:20:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1373265 00:07:17.035 10:20:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:17.294 10:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:17.553 10:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5d828d0-9c6e-4525-a243-a7fc4d9eb726 00:07:17.553 10:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:17.553 10:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:17.553 10:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:17.553 10:20:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1370240 00:07:17.553 10:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1370240 00:07:17.553 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1370240 Killed "${NVMF_APP[@]}" "$@" 00:07:17.553 10:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:17.553 10:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:17.553 10:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:17.553 10:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:17.553 10:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:17.553 10:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1375284 00:07:17.553 10:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1375284 00:07:17.553 10:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:17.553 10:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1375284 ']' 00:07:17.553 10:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.553 10:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:17.553 10:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.553 10:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:17.553 10:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:17.812 [2024-12-12 10:20:51.596989] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:07:17.812 [2024-12-12 10:20:51.597033] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:17.812 [2024-12-12 10:20:51.672232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.812 [2024-12-12 10:20:51.712108] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:17.812 [2024-12-12 10:20:51.712141] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:17.812 [2024-12-12 10:20:51.712148] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:17.812 [2024-12-12 10:20:51.712154] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:07:17.812 [2024-12-12 10:20:51.712159] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:17.812 [2024-12-12 10:20:51.712689] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.812 10:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:17.812 10:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:17.812 10:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:17.812 10:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:17.812 10:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:18.070 10:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:18.070 10:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:18.070 [2024-12-12 10:20:52.007539] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:18.070 [2024-12-12 10:20:52.007640] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:18.070 [2024-12-12 10:20:52.007670] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:18.070 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:18.070 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 5ac58159-f742-45b4-a301-b2fafcbe5da5 00:07:18.070 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=5ac58159-f742-45b4-a301-b2fafcbe5da5 00:07:18.070 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:18.071 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:18.071 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:18.071 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:18.071 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:18.329 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5ac58159-f742-45b4-a301-b2fafcbe5da5 -t 2000 00:07:18.588 [ 00:07:18.588 { 00:07:18.588 "name": "5ac58159-f742-45b4-a301-b2fafcbe5da5", 00:07:18.588 "aliases": [ 00:07:18.588 "lvs/lvol" 00:07:18.588 ], 00:07:18.588 "product_name": "Logical Volume", 00:07:18.588 "block_size": 4096, 00:07:18.588 "num_blocks": 38912, 00:07:18.588 "uuid": "5ac58159-f742-45b4-a301-b2fafcbe5da5", 00:07:18.588 "assigned_rate_limits": { 00:07:18.588 "rw_ios_per_sec": 0, 00:07:18.588 "rw_mbytes_per_sec": 0, 
00:07:18.588 "r_mbytes_per_sec": 0, 00:07:18.588 "w_mbytes_per_sec": 0 00:07:18.588 }, 00:07:18.588 "claimed": false, 00:07:18.588 "zoned": false, 00:07:18.588 "supported_io_types": { 00:07:18.588 "read": true, 00:07:18.588 "write": true, 00:07:18.588 "unmap": true, 00:07:18.588 "flush": false, 00:07:18.588 "reset": true, 00:07:18.588 "nvme_admin": false, 00:07:18.588 "nvme_io": false, 00:07:18.588 "nvme_io_md": false, 00:07:18.588 "write_zeroes": true, 00:07:18.588 "zcopy": false, 00:07:18.588 "get_zone_info": false, 00:07:18.588 "zone_management": false, 00:07:18.588 "zone_append": false, 00:07:18.588 "compare": false, 00:07:18.588 "compare_and_write": false, 00:07:18.588 "abort": false, 00:07:18.588 "seek_hole": true, 00:07:18.588 "seek_data": true, 00:07:18.588 "copy": false, 00:07:18.588 "nvme_iov_md": false 00:07:18.588 }, 00:07:18.588 "driver_specific": { 00:07:18.588 "lvol": { 00:07:18.588 "lvol_store_uuid": "b5d828d0-9c6e-4525-a243-a7fc4d9eb726", 00:07:18.588 "base_bdev": "aio_bdev", 00:07:18.588 "thin_provision": false, 00:07:18.588 "num_allocated_clusters": 38, 00:07:18.588 "snapshot": false, 00:07:18.588 "clone": false, 00:07:18.588 "esnap_clone": false 00:07:18.588 } 00:07:18.588 } 00:07:18.588 } 00:07:18.588 ] 00:07:18.588 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:18.588 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5d828d0-9c6e-4525-a243-a7fc4d9eb726 00:07:18.588 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:18.588 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:18.588 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5d828d0-9c6e-4525-a243-a7fc4d9eb726 00:07:18.588 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:18.847 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:18.847 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:19.106 [2024-12-12 10:20:52.948637] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:19.106 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5d828d0-9c6e-4525-a243-a7fc4d9eb726 00:07:19.106 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:19.106 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5d828d0-9c6e-4525-a243-a7fc4d9eb726 00:07:19.106 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:19.106 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.106 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:19.106 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.106 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:19.106 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.106 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:19.106 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:19.106 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5d828d0-9c6e-4525-a243-a7fc4d9eb726 00:07:19.364 request: 00:07:19.364 { 00:07:19.364 "uuid": "b5d828d0-9c6e-4525-a243-a7fc4d9eb726", 00:07:19.364 "method": "bdev_lvol_get_lvstores", 00:07:19.364 "req_id": 1 00:07:19.364 } 00:07:19.364 Got JSON-RPC error response 00:07:19.364 response: 00:07:19.364 { 00:07:19.364 "code": -19, 00:07:19.364 "message": "No such device" 00:07:19.364 } 00:07:19.364 10:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:19.364 10:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:19.364 10:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:19.364 10:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:19.364 10:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:19.364 aio_bdev 00:07:19.364 10:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5ac58159-f742-45b4-a301-b2fafcbe5da5 00:07:19.364 10:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=5ac58159-f742-45b4-a301-b2fafcbe5da5 00:07:19.364 10:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:19.364 10:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:19.364 10:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:19.364 10:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:19.364 10:20:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:19.622 10:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5ac58159-f742-45b4-a301-b2fafcbe5da5 -t 2000 00:07:19.881 [ 00:07:19.881 { 00:07:19.881 "name": "5ac58159-f742-45b4-a301-b2fafcbe5da5", 00:07:19.881 "aliases": [ 00:07:19.881 "lvs/lvol" 00:07:19.881 ], 00:07:19.881 "product_name": "Logical Volume", 00:07:19.881 "block_size": 4096, 00:07:19.881 "num_blocks": 38912, 00:07:19.881 "uuid": "5ac58159-f742-45b4-a301-b2fafcbe5da5", 00:07:19.881 "assigned_rate_limits": { 00:07:19.881 "rw_ios_per_sec": 0, 00:07:19.881 "rw_mbytes_per_sec": 0, 00:07:19.881 "r_mbytes_per_sec": 0, 00:07:19.881 "w_mbytes_per_sec": 0 00:07:19.881 }, 00:07:19.881 "claimed": false, 00:07:19.881 "zoned": false, 00:07:19.881 "supported_io_types": { 00:07:19.881 "read": true, 00:07:19.881 "write": true, 00:07:19.881 "unmap": true, 00:07:19.881 "flush": false, 00:07:19.881 "reset": true, 00:07:19.881 "nvme_admin": false, 00:07:19.881 "nvme_io": false, 00:07:19.881 "nvme_io_md": false, 00:07:19.881 "write_zeroes": true, 00:07:19.881 "zcopy": false, 00:07:19.881 "get_zone_info": false, 00:07:19.881 "zone_management": false, 00:07:19.881 "zone_append": false, 00:07:19.881 "compare": false, 00:07:19.881 "compare_and_write": false, 00:07:19.881 "abort": false, 00:07:19.881 "seek_hole": true, 00:07:19.881 "seek_data": true, 00:07:19.881 "copy": false, 00:07:19.881 "nvme_iov_md": false 00:07:19.881 }, 00:07:19.881 "driver_specific": { 00:07:19.881 "lvol": { 00:07:19.881 "lvol_store_uuid": "b5d828d0-9c6e-4525-a243-a7fc4d9eb726", 00:07:19.881 "base_bdev": "aio_bdev", 00:07:19.881 "thin_provision": false, 00:07:19.881 "num_allocated_clusters": 38, 00:07:19.881 "snapshot": false, 00:07:19.881 "clone": false, 00:07:19.881 "esnap_clone": false 00:07:19.881 } 00:07:19.881 } 00:07:19.881 } 00:07:19.881 ] 00:07:19.881 10:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:19.881 10:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5d828d0-9c6e-4525-a243-a7fc4d9eb726 00:07:19.881 10:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:20.139 10:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:20.139 10:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5d828d0-9c6e-4525-a243-a7fc4d9eb726 00:07:20.139 10:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:20.139 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:20.139 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5ac58159-f742-45b4-a301-b2fafcbe5da5 00:07:20.398 10:20:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b5d828d0-9c6e-4525-a243-a7fc4d9eb726 00:07:20.656 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:20.656 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:20.915 00:07:20.915 real 0m16.749s 00:07:20.915 user 0m43.869s 00:07:20.915 sys 0m3.579s 00:07:20.915 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.915 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:20.915 ************************************ 00:07:20.915 END TEST lvs_grow_dirty 00:07:20.915 ************************************ 00:07:20.915 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:20.915 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:20.915 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:20.915 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:20.915 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:20.915 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:20.915 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:20.915 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:20.915 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:20.915 nvmf_trace.0 00:07:20.915 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:20.915 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:20.915 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:20.915 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:20.915 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:20.915 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:20.915 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:20.915 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:20.915 rmmod nvme_tcp 00:07:20.915 rmmod nvme_fabrics 00:07:20.915 rmmod nvme_keyring 00:07:20.915 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:20.915 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:20.915 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:20.915 
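Condensed, the dirty-grow pass that just finished exercises the following RPC sequence; a sketch assuming a running nvmf_tgt and the same paths, where RPC= is shorthand for scripts/rpc.py and $lvs holds the UUID printed by create_lvstore (all RPC names and the 49 -> 99 / 61 cluster counts are taken from the trace above):
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk; RPC=$SPDK/scripts/rpc.py
truncate -s 200M $SPDK/test/nvmf/target/aio_bdev            # 200M backing file
$RPC bdev_aio_create $SPDK/test/nvmf/target/aio_bdev aio_bdev 4096
lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
$RPC bdev_lvol_create -u $lvs lvol 150                      # 150M lvol on the store
truncate -s 400M $SPDK/test/nvmf/target/aio_bdev            # grow the file underneath
$RPC bdev_aio_rescan aio_bdev                               # block count 51200 -> 102400
$RPC bdev_lvol_grow_lvstore -u $lvs                         # data clusters 49 -> 99
# "dirty": kill -9 the target here, restart it, re-create the aio bdev and let
# blobstore recovery replay; the grown geometry must survive the crash:
$RPC bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].total_data_clusters'   # expect 99
$RPC bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].free_clusters'         # expect 61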
10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1375284 ']' 00:07:20.915 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1375284 00:07:20.915 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1375284 ']' 00:07:20.915 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1375284 00:07:20.915 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:20.915 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:20.915 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1375284 00:07:20.915 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:20.915 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:20.915 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1375284' 00:07:20.915 killing process with pid 1375284 00:07:20.915 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1375284 00:07:20.915 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1375284 00:07:21.175 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:21.175 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:21.175 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:21.175 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:21.175 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:21.175 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:21.175 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:21.175 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:21.175 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:21.175 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:21.175 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:21.175 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:23.710 00:07:23.710 real 0m41.695s 00:07:23.710 user 1m4.717s 00:07:23.710 sys 0m9.954s 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:23.710 ************************************ 00:07:23.710 END TEST nvmf_lvs_grow 00:07:23.710 ************************************ 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:23.710 ************************************ 00:07:23.710 START TEST nvmf_bdev_io_wait 00:07:23.710 ************************************ 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:23.710 * Looking for test storage... 00:07:23.710 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:23.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.710 --rc genhtml_branch_coverage=1 00:07:23.710 --rc genhtml_function_coverage=1 00:07:23.710 --rc genhtml_legend=1 00:07:23.710 --rc geninfo_all_blocks=1 00:07:23.710 --rc geninfo_unexecuted_blocks=1 00:07:23.710 00:07:23.710 ' 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:23.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.710 --rc genhtml_branch_coverage=1 00:07:23.710 --rc genhtml_function_coverage=1 00:07:23.710 --rc genhtml_legend=1 00:07:23.710 --rc geninfo_all_blocks=1 00:07:23.710 --rc geninfo_unexecuted_blocks=1 00:07:23.710 00:07:23.710 ' 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:23.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.710 --rc genhtml_branch_coverage=1 00:07:23.710 --rc genhtml_function_coverage=1 00:07:23.710 --rc genhtml_legend=1 00:07:23.710 --rc geninfo_all_blocks=1 00:07:23.710 --rc geninfo_unexecuted_blocks=1 00:07:23.710 00:07:23.710 ' 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:23.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.710 --rc genhtml_branch_coverage=1 00:07:23.710 --rc genhtml_function_coverage=1 00:07:23.710 --rc genhtml_legend=1 00:07:23.710 --rc geninfo_all_blocks=1 00:07:23.710 --rc geninfo_unexecuted_blocks=1 00:07:23.710 00:07:23.710 ' 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:23.710 10:20:57 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.710 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:23.711 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.711 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:23.711 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:23.711 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:23.711 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:23.711 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:23.711 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:23.711 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:23.711 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:23.711 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:23.711 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:23.711 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:23.711 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:23.711 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:07:23.711 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:23.711 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:23.711 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:23.711 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:23.711 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:23.711 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:23.711 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:23.711 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:23.711 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:23.711 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:23.711 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:23.711 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:23.711 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:30.279 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:30.279 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:30.279 10:21:03 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:30.279 Found net devices under 0000:af:00.0: cvl_0_0 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:30.279 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:30.280 Found net devices under 0000:af:00.1: cvl_0_1 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:30.280 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:30.280 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.329 ms 00:07:30.280 00:07:30.280 --- 10.0.0.2 ping statistics --- 00:07:30.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.280 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:30.280 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:30.280 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:07:30.280 00:07:30.280 --- 10.0.0.1 ping statistics --- 00:07:30.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.280 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1379454 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1379454 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1379454 ']' 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:30.280 [2024-12-12 10:21:03.467480] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
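
Everything from prepare_net_devs down to the two pings above is nvmftestinit turning the two E810 ports into a back-to-back NVMe/TCP test bed: the target port is moved into its own network namespace, so 10.0.0.1 -> 10.0.0.2 traffic really crosses the wire even though both ports sit in the same host. Condensed into plain commands, the traced steps amount to the sketch below (interface names cvl_0_0/cvl_0_1 are the ones this runner derived from the two E810 ports; the nvmf_tgt path is shortened):

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"              # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port; the comment tag lets nvmftestfini strip the rule on teardown.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                           # initiator -> target sanity check
ip netns exec "$NS" ping -c 1 10.0.0.1       # and back
# The target app then runs inside the namespace:
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &

The "Starting SPDK" banner above and the DPDK EAL parameter line that follows are that namespaced nvmf_tgt coming up; --wait-for-rpc holds initialization so the script can issue bdev_set_options over RPC before framework_start_init, which is exactly what the trace does a few lines further down.
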
00:07:30.280 [2024-12-12 10:21:03.467528] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:30.280 [2024-12-12 10:21:03.546617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:30.280 [2024-12-12 10:21:03.589080] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:30.280 [2024-12-12 10:21:03.589116] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:30.280 [2024-12-12 10:21:03.589123] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:30.280 [2024-12-12 10:21:03.589129] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:30.280 [2024-12-12 10:21:03.589133] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:30.280 [2024-12-12 10:21:03.590616] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.280 [2024-12-12 10:21:03.590649] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.280 [2024-12-12 10:21:03.590761] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.280 [2024-12-12 10:21:03.590762] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:07:30.280 [2024-12-12 10:21:03.718523] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:30.280 Malloc0 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.280 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:30.281 [2024-12-12 10:21:03.761556] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1379640 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1379642 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1379643 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1379645 00:07:30.281 10:21:03 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:30.281 { 00:07:30.281 "params": { 00:07:30.281 "name": "Nvme$subsystem", 00:07:30.281 "trtype": "$TEST_TRANSPORT", 00:07:30.281 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:30.281 "adrfam": "ipv4", 00:07:30.281 "trsvcid": "$NVMF_PORT", 00:07:30.281 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:30.281 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:30.281 "hdgst": ${hdgst:-false}, 00:07:30.281 "ddgst": ${ddgst:-false} 00:07:30.281 }, 00:07:30.281 "method": "bdev_nvme_attach_controller" 00:07:30.281 } 00:07:30.281 EOF 00:07:30.281 )") 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:30.281 { 00:07:30.281 "params": { 00:07:30.281 "name": "Nvme$subsystem", 00:07:30.281 "trtype": "$TEST_TRANSPORT", 00:07:30.281 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:30.281 "adrfam": "ipv4", 00:07:30.281 "trsvcid": "$NVMF_PORT", 00:07:30.281 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:30.281 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:30.281 "hdgst": ${hdgst:-false}, 00:07:30.281 "ddgst": ${ddgst:-false} 00:07:30.281 }, 00:07:30.281 "method": "bdev_nvme_attach_controller" 00:07:30.281 } 00:07:30.281 EOF 00:07:30.281 )") 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:30.281 { 00:07:30.281 "params": { 00:07:30.281 "name": "Nvme$subsystem", 00:07:30.281 "trtype": "$TEST_TRANSPORT", 
00:07:30.281 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:30.281 "adrfam": "ipv4", 00:07:30.281 "trsvcid": "$NVMF_PORT", 00:07:30.281 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:30.281 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:30.281 "hdgst": ${hdgst:-false}, 00:07:30.281 "ddgst": ${ddgst:-false} 00:07:30.281 }, 00:07:30.281 "method": "bdev_nvme_attach_controller" 00:07:30.281 } 00:07:30.281 EOF 00:07:30.281 )") 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:30.281 { 00:07:30.281 "params": { 00:07:30.281 "name": "Nvme$subsystem", 00:07:30.281 "trtype": "$TEST_TRANSPORT", 00:07:30.281 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:30.281 "adrfam": "ipv4", 00:07:30.281 "trsvcid": "$NVMF_PORT", 00:07:30.281 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:30.281 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:30.281 "hdgst": ${hdgst:-false}, 00:07:30.281 "ddgst": ${ddgst:-false} 00:07:30.281 }, 00:07:30.281 "method": "bdev_nvme_attach_controller" 00:07:30.281 } 00:07:30.281 EOF 00:07:30.281 )") 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1379640 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:30.281 "params": { 00:07:30.281 "name": "Nvme1", 00:07:30.281 "trtype": "tcp", 00:07:30.281 "traddr": "10.0.0.2", 00:07:30.281 "adrfam": "ipv4", 00:07:30.281 "trsvcid": "4420", 00:07:30.281 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:30.281 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:30.281 "hdgst": false, 00:07:30.281 "ddgst": false 00:07:30.281 }, 00:07:30.281 "method": "bdev_nvme_attach_controller" 00:07:30.281 }' 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:30.281 "params": { 00:07:30.281 "name": "Nvme1", 00:07:30.281 "trtype": "tcp", 00:07:30.281 "traddr": "10.0.0.2", 00:07:30.281 "adrfam": "ipv4", 00:07:30.281 "trsvcid": "4420", 00:07:30.281 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:30.281 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:30.281 "hdgst": false, 00:07:30.281 "ddgst": false 00:07:30.281 }, 00:07:30.281 "method": "bdev_nvme_attach_controller" 00:07:30.281 }' 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:30.281 "params": { 00:07:30.281 "name": "Nvme1", 00:07:30.281 "trtype": "tcp", 00:07:30.281 "traddr": "10.0.0.2", 00:07:30.281 "adrfam": "ipv4", 00:07:30.281 "trsvcid": "4420", 00:07:30.281 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:30.281 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:30.281 "hdgst": false, 00:07:30.281 "ddgst": false 00:07:30.281 }, 00:07:30.281 "method": "bdev_nvme_attach_controller" 00:07:30.281 }' 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:30.281 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:30.281 "params": { 00:07:30.281 "name": "Nvme1", 00:07:30.281 "trtype": "tcp", 00:07:30.281 "traddr": "10.0.0.2", 00:07:30.281 "adrfam": "ipv4", 00:07:30.281 "trsvcid": "4420", 00:07:30.281 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:30.281 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:30.281 "hdgst": false, 00:07:30.281 "ddgst": false 00:07:30.281 }, 00:07:30.281 "method": "bdev_nvme_attach_controller" 00:07:30.281 }' 00:07:30.281 [2024-12-12 10:21:03.813808] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:07:30.281 [2024-12-12 10:21:03.813805] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:07:30.281 [2024-12-12 10:21:03.813804] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:07:30.281 [2024-12-12 10:21:03.813859] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 
00:07:30.281 [2024-12-12 10:21:03.813860] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 
00:07:30.281 [2024-12-12 10:21:03.813860] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 
00:07:30.281 [2024-12-12 10:21:03.816667] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:07:30.282 [2024-12-12 10:21:03.816713] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 
00:07:30.282 [2024-12-12 10:21:04.001770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 
00:07:30.282 [2024-12-12 10:21:04.046753] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 
00:07:30.282 [2024-12-12 10:21:04.100907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 
00:07:30.282 [2024-12-12 10:21:04.145067] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 
00:07:30.282 [2024-12-12 10:21:04.193254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 
00:07:30.282 [2024-12-12 10:21:04.254163] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 
00:07:30.282 [2024-12-12 10:21:04.254753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 
00:07:30.282 [2024-12-12 10:21:04.296437] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 
00:07:30.540 Running I/O for 1 seconds... 
00:07:30.540 Running I/O for 1 seconds... 
00:07:30.540 Running I/O for 1 seconds... 
00:07:30.799 Running I/O for 1 seconds... 
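
Four bdevperf processes now run concurrently, one per I/O type. The EAL parameter lines above show the pattern: distinct core masks 0x10/0x20/0x40/0x80 pin them to reactors 4-7, clear of the target's 0xF, and distinct -i instance IDs give each its own shared-memory/hugepage file prefix (spdk1-spdk4) so four SPDK applications can coexist on one host. Written as a loop instead of the script's four literal invocations (a sketch; path shortened, gen_nvmf_target_json as sketched earlier):

BDEVPERF=./build/examples/bdevperf
pids=()
i=1
for w in write read flush unmap; do
    mask=$(printf '0x%x' $((0x10 << (i - 1))))    # 0x10, 0x20, 0x40, 0x80
    # -q 128 -o 4096 -t 1 -s 256 match the traced invocations.
    "$BDEVPERF" -m "$mask" -i "$i" --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w "$w" -t 1 -s 256 &
    pids+=($!)
    i=$((i + 1))
done
wait "${pids[@]}"    # bdev_io_wait.sh waits per PID (WRITE_PID, READ_PID, ...)

In the result tables that follow, the flush job posts ~245K IOPS where the others sit near 8-12K: a flush against a RAM-backed Malloc bdev is essentially a no-op, while write, read, and unmap each pay the NVMe/TCP round trip.
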
00:07:31.365 8718.00 IOPS, 34.05 MiB/s 
00:07:31.365 Latency(us) 
00:07:31.365 [2024-12-12T09:21:05.388Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:07:31.365 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 
00:07:31.365 Nvme1n1 : 1.02 8724.91 34.08 0.00 0.00 14588.88 6303.94 23592.96 
00:07:31.365 [2024-12-12T09:21:05.389Z] =================================================================================================================== 
00:07:31.366 [2024-12-12T09:21:05.389Z] Total : 8724.91 34.08 0.00 0.00 14588.88 6303.94 23592.96 
00:07:31.624 12372.00 IOPS, 48.33 MiB/s 
[2024-12-12T09:21:05.647Z] 10:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1379642 
00:07:31.624 
00:07:31.624 Latency(us) 
00:07:31.624 [2024-12-12T09:21:05.647Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:07:31.624 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 
00:07:31.624 Nvme1n1 : 1.01 12431.18 48.56 0.00 0.00 10264.13 4837.18 21595.67 
00:07:31.624 [2024-12-12T09:21:05.647Z] =================================================================================================================== 
00:07:31.624 [2024-12-12T09:21:05.647Z] Total : 12431.18 48.56 0.00 0.00 10264.13 4837.18 21595.67 
00:07:31.624 8028.00 IOPS, 31.36 MiB/s 
00:07:31.624 Latency(us) 
00:07:31.624 [2024-12-12T09:21:05.647Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:07:31.624 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 
00:07:31.624 Nvme1n1 : 1.01 8124.78 31.74 0.00 0.00 15716.14 3588.88 36450.50 
00:07:31.624 [2024-12-12T09:21:05.647Z] =================================================================================================================== 
00:07:31.624 [2024-12-12T09:21:05.647Z] Total : 8124.78 31.74 0.00 0.00 15716.14 3588.88 36450.50 
00:07:31.624 244696.00 IOPS, 955.84 MiB/s 
00:07:31.624 Latency(us) 
00:07:31.624 [2024-12-12T09:21:05.647Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:07:31.624 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 
00:07:31.624 Nvme1n1 : 1.00 244323.59 954.39 0.00 0.00 521.01 226.26 1521.37 
00:07:31.624 [2024-12-12T09:21:05.647Z] =================================================================================================================== 
00:07:31.624 [2024-12-12T09:21:05.647Z] Total : 244323.59 954.39 0.00 0.00 521.01 226.26 1521.37 
00:07:31.883 10:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1379643 
00:07:31.883 10:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1379645 
00:07:31.883 10:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
00:07:31.883 10:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:31.883 10:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 
00:07:31.883 10:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:31.883 10:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 
00:07:31.883 10:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 
00:07:31.883 10:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@516 -- # nvmfcleanup 00:07:31.883 10:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:31.883 10:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:31.883 10:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:31.883 10:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:31.883 10:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:31.883 rmmod nvme_tcp 00:07:31.883 rmmod nvme_fabrics 00:07:31.883 rmmod nvme_keyring 00:07:31.883 10:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:31.883 10:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:31.883 10:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:31.883 10:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1379454 ']' 00:07:31.883 10:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1379454 00:07:31.883 10:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1379454 ']' 00:07:31.883 10:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1379454 00:07:31.883 10:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:07:31.883 10:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:31.883 10:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1379454 00:07:31.883 10:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:31.883 10:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:31.883 10:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1379454' 00:07:31.883 killing process with pid 1379454 00:07:31.883 10:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1379454 00:07:31.883 10:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1379454 00:07:32.142 10:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:32.142 10:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:32.142 10:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:32.142 10:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:32.142 10:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:32.142 10:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:32.142 10:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:32.142 10:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:32.142 10:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:32.142 10:21:05 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:32.142 10:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:32.142 10:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.045 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:34.045 00:07:34.045 real 0m10.848s 00:07:34.045 user 0m16.445s 00:07:34.045 sys 0m6.251s 00:07:34.045 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.045 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:34.045 ************************************ 00:07:34.045 END TEST nvmf_bdev_io_wait 00:07:34.045 ************************************ 00:07:34.304 10:21:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:34.304 10:21:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:34.304 10:21:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.304 10:21:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:34.304 ************************************ 00:07:34.304 START TEST nvmf_queue_depth 00:07:34.304 ************************************ 00:07:34.304 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:34.304 * Looking for test storage... 
00:07:34.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:34.304 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:34.304 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:07:34.304 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:34.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.305 --rc genhtml_branch_coverage=1 00:07:34.305 --rc genhtml_function_coverage=1 00:07:34.305 --rc genhtml_legend=1 00:07:34.305 --rc geninfo_all_blocks=1 00:07:34.305 --rc geninfo_unexecuted_blocks=1 00:07:34.305 00:07:34.305 ' 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:34.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.305 --rc genhtml_branch_coverage=1 00:07:34.305 --rc genhtml_function_coverage=1 00:07:34.305 --rc genhtml_legend=1 00:07:34.305 --rc geninfo_all_blocks=1 00:07:34.305 --rc geninfo_unexecuted_blocks=1 00:07:34.305 00:07:34.305 ' 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:34.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.305 --rc genhtml_branch_coverage=1 00:07:34.305 --rc genhtml_function_coverage=1 00:07:34.305 --rc genhtml_legend=1 00:07:34.305 --rc geninfo_all_blocks=1 00:07:34.305 --rc geninfo_unexecuted_blocks=1 00:07:34.305 00:07:34.305 ' 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:34.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.305 --rc genhtml_branch_coverage=1 00:07:34.305 --rc genhtml_function_coverage=1 00:07:34.305 --rc genhtml_legend=1 00:07:34.305 --rc geninfo_all_blocks=1 00:07:34.305 --rc geninfo_unexecuted_blocks=1 00:07:34.305 00:07:34.305 ' 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:34.305 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:34.565 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:34.565 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:34.565 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:34.565 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:34.565 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:34.565 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:34.565 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:34.565 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:34.565 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:07:34.565 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:34.565 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:34.565 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:34.565 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:34.565 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:34.565 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:34.565 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:34.565 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:34.565 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:34.565 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.565 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:34.565 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:34.565 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:34.565 10:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:41.131 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:41.131 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:07:41.131 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:41.131 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:41.131 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:41.132 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:41.132 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:41.132 Found net devices under 0000:af:00.0: cvl_0_0 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:41.132 Found net devices under 0000:af:00.1: cvl_0_1 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
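For reference, the scan traced above resolves each whitelisted PCI function to its kernel net device by globbing sysfs and checking link state. A standalone sketch of the same walk, using the PCI addresses from this run (an illustration of the pattern, not the harness code itself; requires a host with these devices):
# For each candidate PCI function, print the netdevs it exposes and their link state.
for pci in 0000:af:00.0 0000:af:00.1; do
  for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
    [ -e "$netdir" ] || continue
    echo "Found net device under $pci: ${netdir##*/} ($(cat "$netdir/operstate"))"
  done
done
Only interfaces whose operstate is up are collected into net_devs, which is why the trace evaluates [[ up == up ]] once per interface found.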
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:41.132 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:41.132 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:41.132 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:41.132 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:41.132 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:41.132 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:41.132 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:41.132 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:41.132 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:41.132 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:41.132 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.423 ms 00:07:41.132 00:07:41.132 --- 10.0.0.2 ping statistics --- 00:07:41.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.132 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms 00:07:41.132 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:41.132 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:41.132 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:07:41.132 00:07:41.132 --- 10.0.0.1 ping statistics --- 00:07:41.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.132 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:07:41.132 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:41.132 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:07:41.132 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:41.132 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:41.132 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:41.132 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1383765 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1383765 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1383765 ']' 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:41.133 [2024-12-12 10:21:14.303270] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
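The interface plumbing traced above is reproducible on its own. A minimal sketch assuming the same netdev names and addressing as this run (root required; every command below appears verbatim in the trace):
# Move the target-side port into its own namespace and address both ends.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
# Bring up both ends plus loopback inside the namespace.
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port on the initiator-facing interface, then verify reachability both ways.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
Isolating the target port in a namespace lets a single host act as both initiator and target while traffic still crosses the physical E810 port pair.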
00:07:41.133 [2024-12-12 10:21:14.303311] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:41.133 [2024-12-12 10:21:14.383106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.133 [2024-12-12 10:21:14.420827] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:41.133 [2024-12-12 10:21:14.420862] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:41.133 [2024-12-12 10:21:14.420869] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:41.133 [2024-12-12 10:21:14.420874] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:41.133 [2024-12-12 10:21:14.420879] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:41.133 [2024-12-12 10:21:14.421379] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:41.133 [2024-12-12 10:21:14.569158] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:41.133 Malloc0 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.133 10:21:14 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:41.133 [2024-12-12 10:21:14.619397] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1383965 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1383965 /var/tmp/bdevperf.sock 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1383965 ']' 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:41.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:41.133 [2024-12-12 10:21:14.669038] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
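The target provisioning that just completed is a short RPC sequence. Restated standalone with the same arguments as the trace (here $rpc points at scripts/rpc.py talking to the target's default RPC socket):
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192            # TCP transport, options exactly as traced
$rpc bdev_malloc_create 64 512 -b Malloc0               # 64 MiB RAM-backed bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
After the last call the target logs 'NVMe/TCP Target Listening on 10.0.0.2 port 4420', which is the state the bdevperf initiator below connects to.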
00:07:41.133 [2024-12-12 10:21:14.669077] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1383965 ] 00:07:41.133 [2024-12-12 10:21:14.742116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.133 [2024-12-12 10:21:14.784288] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:41.133 NVMe0n1 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.133 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:41.133 Running I/O for 10 seconds... 00:07:43.443 12014.00 IOPS, 46.93 MiB/s [2024-12-12T09:21:18.402Z] 12283.00 IOPS, 47.98 MiB/s [2024-12-12T09:21:19.338Z] 12291.00 IOPS, 48.01 MiB/s [2024-12-12T09:21:20.401Z] 12415.50 IOPS, 48.50 MiB/s [2024-12-12T09:21:21.394Z] 12474.20 IOPS, 48.73 MiB/s [2024-12-12T09:21:22.330Z] 12447.33 IOPS, 48.62 MiB/s [2024-12-12T09:21:23.266Z] 12433.86 IOPS, 48.57 MiB/s [2024-12-12T09:21:24.202Z] 12453.50 IOPS, 48.65 MiB/s [2024-12-12T09:21:25.139Z] 12495.00 IOPS, 48.81 MiB/s [2024-12-12T09:21:25.398Z] 12478.50 IOPS, 48.74 MiB/s 00:07:51.375 Latency(us) 00:07:51.375 [2024-12-12T09:21:25.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:51.375 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:07:51.375 Verification LBA range: start 0x0 length 0x4000 00:07:51.375 NVMe0n1 : 10.05 12513.13 48.88 0.00 0.00 81581.77 18225.25 52428.80 00:07:51.375 [2024-12-12T09:21:25.398Z] =================================================================================================================== 00:07:51.375 [2024-12-12T09:21:25.398Z] Total : 12513.13 48.88 0.00 0.00 81581.77 18225.25 52428.80 00:07:51.375 { 00:07:51.375 "results": [ 00:07:51.375 { 00:07:51.375 "job": "NVMe0n1", 00:07:51.375 "core_mask": "0x1", 00:07:51.375 "workload": "verify", 00:07:51.375 "status": "finished", 00:07:51.375 "verify_range": { 00:07:51.375 "start": 0, 00:07:51.375 "length": 16384 00:07:51.375 }, 00:07:51.375 "queue_depth": 1024, 00:07:51.375 "io_size": 4096, 00:07:51.375 "runtime": 10.054157, 00:07:51.375 "iops": 12513.132627628553, 00:07:51.375 "mibps": 48.879424326674034, 00:07:51.375 "io_failed": 0, 00:07:51.375 "io_timeout": 0, 00:07:51.375 "avg_latency_us": 81581.77354676345, 00:07:51.375 "min_latency_us": 18225.249523809525, 00:07:51.375 "max_latency_us": 52428.8 00:07:51.375 } 00:07:51.375 ], 00:07:51.375 "core_count": 1 00:07:51.375 } 00:07:51.375 10:21:25 
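On the initiator side, the run above reduces to three steps. A sketch with the same sockets and parameters as this run (the harness additionally waits for each RPC socket to come up before using it):
bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# 1. Start bdevperf idle (-z) on a private RPC socket: queue depth 1024, 4 KiB verify I/O, 10 s.
$bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
# 2. Attach the remote namespace as bdev NVMe0n1 over NVMe/TCP.
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# 3. Trigger the configured workload.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
The reported figures are self-consistent: 12513.13 IOPS × 4096 B is about 48.9 MiB/s, and by Little's law 12513.13 IOPS × 81.58 ms mean latency is roughly 1021 I/Os in flight, matching the configured queue depth of 1024.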
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1383965 00:07:51.375 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1383965 ']' 00:07:51.375 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1383965 00:07:51.375 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:51.375 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:51.375 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1383965 00:07:51.375 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:51.375 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:51.375 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1383965' 00:07:51.375 killing process with pid 1383965 00:07:51.375 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1383965 00:07:51.375 Received shutdown signal, test time was about 10.000000 seconds 00:07:51.375 00:07:51.375 Latency(us) 00:07:51.375 [2024-12-12T09:21:25.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:51.375 [2024-12-12T09:21:25.398Z] =================================================================================================================== 00:07:51.375 [2024-12-12T09:21:25.398Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:51.375 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1383965 00:07:51.375 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:07:51.375 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:07:51.375 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:51.375 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:07:51.375 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:51.375 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:07:51.375 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:51.375 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:51.634 rmmod nvme_tcp 00:07:51.634 rmmod nvme_fabrics 00:07:51.634 rmmod nvme_keyring 00:07:51.634 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:51.634 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:07:51.634 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:07:51.634 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1383765 ']' 00:07:51.634 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1383765 00:07:51.634 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1383765 ']' 00:07:51.634 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 1383765 00:07:51.634 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:51.634 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:51.634 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1383765 00:07:51.634 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:51.634 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:51.634 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1383765' 00:07:51.634 killing process with pid 1383765 00:07:51.634 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1383765 00:07:51.634 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1383765 00:07:51.892 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:51.892 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:51.892 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:51.892 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:07:51.892 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:07:51.892 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:51.892 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:07:51.892 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:51.892 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:51.892 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.892 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:51.892 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.795 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:53.795 00:07:53.795 real 0m19.637s 00:07:53.795 user 0m22.899s 00:07:53.795 sys 0m6.118s 00:07:53.795 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.795 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:53.795 ************************************ 00:07:53.795 END TEST nvmf_queue_depth 00:07:53.795 ************************************ 00:07:53.795 10:21:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:53.795 10:21:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:53.795 10:21:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.795 10:21:27 nvmf_tcp.nvmf_target_core -- 
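Note the firewall handling in the teardown above: every rule the harness inserts carries an -m comment tag, so cleanup can strip exactly those rules in one pass without touching the rest of the ruleset. A minimal sketch of the same pattern, using the rule from this run:
# Insert a rule tagged with its own spec...
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# ...then drop every SPDK_NVMF-tagged rule at teardown.
iptables-save | grep -v SPDK_NVMF | iptables-restore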
common/autotest_common.sh@10 -- # set +x 00:07:54.054 ************************************ 00:07:54.054 START TEST nvmf_target_multipath 00:07:54.054 ************************************ 00:07:54.054 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:54.054 * Looking for test storage... 00:07:54.054 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:54.054 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:54.054 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:07:54.054 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:54.054 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:54.054 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:54.055 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:54.055 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:54.055 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:07:54.055 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:07:54.055 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:07:54.055 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:07:54.055 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:07:54.055 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:07:54.055 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:07:54.055 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:54.055 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:07:54.055 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:07:54.055 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:54.055 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:54.055 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:07:54.055 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:07:54.055 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:54.055 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:07:54.055 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:54.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.055 --rc genhtml_branch_coverage=1 00:07:54.055 --rc genhtml_function_coverage=1 00:07:54.055 --rc genhtml_legend=1 00:07:54.055 --rc geninfo_all_blocks=1 00:07:54.055 --rc geninfo_unexecuted_blocks=1 00:07:54.055 00:07:54.055 ' 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:54.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.055 --rc genhtml_branch_coverage=1 00:07:54.055 --rc genhtml_function_coverage=1 00:07:54.055 --rc genhtml_legend=1 00:07:54.055 --rc geninfo_all_blocks=1 00:07:54.055 --rc geninfo_unexecuted_blocks=1 00:07:54.055 00:07:54.055 ' 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:54.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.055 --rc genhtml_branch_coverage=1 00:07:54.055 --rc genhtml_function_coverage=1 00:07:54.055 --rc genhtml_legend=1 00:07:54.055 --rc geninfo_all_blocks=1 00:07:54.055 --rc geninfo_unexecuted_blocks=1 00:07:54.055 00:07:54.055 ' 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:54.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.055 --rc genhtml_branch_coverage=1 00:07:54.055 --rc genhtml_function_coverage=1 00:07:54.055 --rc genhtml_legend=1 00:07:54.055 --rc geninfo_all_blocks=1 00:07:54.055 --rc geninfo_unexecuted_blocks=1 00:07:54.055 00:07:54.055 ' 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:54.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:54.055 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:07:54.056 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:54.056 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:54.056 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:54.056 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:54.056 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:54.056 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.056 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:54.056 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.056 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:54.056 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:54.056 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:07:54.056 10:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:00.623 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:00.623 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:00.623 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:00.624 Found net devices under 0000:af:00.0: cvl_0_0 00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:00.624 10:21:33 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]]
00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:08:00.624 Found net devices under 0000:af:00.1: cvl_0_1
00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes
00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:08:00.624 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:00.624 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.341 ms
00:08:00.624 
00:08:00.624 --- 10.0.0.2 ping statistics ---
00:08:00.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:00.624 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms
00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:00.624 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:00.624 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms
00:08:00.624 
00:08:00.624 --- 10.0.0.1 ping statistics ---
00:08:00.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:00.624 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms
00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0
00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']'
00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test'
00:08:00.624 only one NIC for nvmf test
00:08:00.624 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini
00:08:00.625 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:00.625 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
00:08:00.625 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:00.625 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e
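For readers following the trace: nvmf_tcp_init above splits the two ports of one physical NIC into a target side and an initiator side by moving one netdev into a private network namespace, so NVMe/TCP traffic really crosses the wire. A minimal standalone sketch of the same topology, with the interface names (cvl_0_0, cvl_0_1) and 10.0.0.0/24 addresses taken from this run (they will differ on other rigs):
# Target port lives in its own namespace; initiator port stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port, tagging the rule so teardown can find and strip it later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2  # sanity check: root namespace can reach the target address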
00:08:00.625 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:00.625 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:08:00.625 rmmod nvme_tcp
00:08:00.625 rmmod nvme_fabrics
00:08:00.625 rmmod nvme_keyring
00:08:00.625 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:00.625 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e
00:08:00.625 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0
00:08:00.625 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:08:00.625 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:08:00.625 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:08:00.625 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:08:00.625 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr
00:08:00.625 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save
00:08:00.625 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:08:00.625 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore
00:08:00.625 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:08:00.625 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns
00:08:00.625 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:00.625 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:00.625 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:08:02.531 
00:08:02.531 real 0m8.336s
00:08:02.531 user 0m1.800s
00:08:02.531 sys 0m4.559s
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:08:02.531 ************************************
00:08:02.531 END TEST nvmf_target_multipath
00:08:02.531 ************************************
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:02.531 ************************************
00:08:02.531 START TEST nvmf_zcopy
00:08:02.531 ************************************
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:08:02.531 * Looking for test storage...
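The teardown traced above removes only the firewall rules it installed itself: iptr round-trips the ruleset through iptables-save and drops every line carrying the SPDK_NVMF comment that ipts attached at setup time. A sketch of that idea, assuming the function body follows the three commands visible in the trace (the real helper in nvmf/common.sh may differ in detail):
iptr() {
  # Reload the current ruleset minus any rule tagged SPDK_NVMF by ipts.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
}
The tag-and-filter approach means the test never has to remember rule positions; whatever it inserted, in any order, is stripped in one pass while unrelated firewall rules survive.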
00:08:02.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-:
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-:
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<'
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:02.531 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1
00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1
00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1
00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1
00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2
00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2
00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2
00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2
00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0
00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:08:02.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:02.532 --rc genhtml_branch_coverage=1
00:08:02.532 --rc genhtml_function_coverage=1
00:08:02.532 --rc genhtml_legend=1
00:08:02.532 --rc geninfo_all_blocks=1
00:08:02.532 --rc geninfo_unexecuted_blocks=1
00:08:02.532 
00:08:02.532 '
00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:08:02.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:02.532 --rc genhtml_branch_coverage=1
00:08:02.532 --rc genhtml_function_coverage=1
00:08:02.532 --rc genhtml_legend=1
00:08:02.532 --rc geninfo_all_blocks=1
00:08:02.532 --rc geninfo_unexecuted_blocks=1
00:08:02.532 
00:08:02.532 '
00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:08:02.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:02.532 --rc genhtml_branch_coverage=1
00:08:02.532 --rc genhtml_function_coverage=1
00:08:02.532 --rc genhtml_legend=1
00:08:02.532 --rc geninfo_all_blocks=1
00:08:02.532 --rc geninfo_unexecuted_blocks=1
00:08:02.532 
00:08:02.532 '
00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:08:02.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:02.532 --rc genhtml_branch_coverage=1
00:08:02.532 --rc genhtml_function_coverage=1
00:08:02.532 --rc genhtml_legend=1
00:08:02.532 --rc geninfo_all_blocks=1
00:08:02.532 --rc geninfo_unexecuted_blocks=1
00:08:02.532 
00:08:02.532 '
00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s
00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:02.532 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:02.532 10:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:09.101 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:09.101 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:09.101 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:09.101 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:09.101 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:09.101 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:09.101 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:09.101 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:09.101 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:09.101 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:09.101 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:09.101 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:09.101 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:09.101 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:09.101 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:09.101 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:09.101 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:09.101 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:09.101 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:09.101 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:09.101 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:09.101 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:09.101 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:09.101 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:09.101 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:09.101 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:09.101 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:09.101 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:09.101 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:09.101 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:09.101 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:09.101 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:09.101 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:09.101 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:09.101 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:09.101 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:09.101 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:09.101 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:09.101 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:09.101 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:09.101 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:09.101 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:09.101 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:09.101 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:09.101 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:09.101 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:09.102 Found net devices under 0000:af:00.0: cvl_0_0 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:09.102 Found net devices under 0000:af:00.1: cvl_0_1 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:09.102 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:09.102 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.471 ms 00:08:09.102 00:08:09.102 --- 10.0.0.2 ping statistics --- 00:08:09.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.102 rtt min/avg/max/mdev = 0.471/0.471/0.471/0.000 ms 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:09.102 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:09.102 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:08:09.102 00:08:09.102 --- 10.0.0.1 ping statistics --- 00:08:09.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.102 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1392735 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1392735 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1392735 ']' 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:09.102 [2024-12-12 10:21:42.468625] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
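nvmfappstart, traced just above, launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until the RPC socket answers. A rough sketch of that start-up handshake, assuming scripts/rpc.py and the default /var/tmp/spdk.sock socket seen in the trace (the real waitforlisten in autotest_common.sh is more elaborate):
# Start the target in the namespace, exactly as nvmf/common.sh@508 shows.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
for ((i = 0; i < 100; i++)); do   # max_retries=100, matching the trace
  # The target is considered ready once any RPC succeeds against its socket.
  ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version &> /dev/null && break
  sleep 0.1
done
Polling an innocuous RPC rather than sleeping a fixed time is what lets the suite report readiness in milliseconds on fast machines while still tolerating slow starts.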
00:08:09.102 [2024-12-12 10:21:42.468674] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:09.102 [2024-12-12 10:21:42.545709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:09.102 [2024-12-12 10:21:42.584251] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:08:09.102 [2024-12-12 10:21:42.584284] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:08:09.102 [2024-12-12 10:21:42.584291] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:09.102 [2024-12-12 10:21:42.584297] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:09.102 [2024-12-12 10:21:42.584302] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:08:09.102 [2024-12-12 10:21:42.584757] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0
00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:09.102 [2024-12-12 10:21:42.719929] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:09.102 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:09.103 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:09.103 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:08:09.103 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:09.103 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:09.103 [2024-12-12 10:21:42.744111] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:08:09.103 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:09.103 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:08:09.103 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:09.103 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:09.103 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:09.103 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:08:09.103 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:09.103 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:09.103 malloc0
00:08:09.103 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:09.103 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:08:09.103 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:09.103 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:09.103 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:09.103 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:08:09.103 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:08:09.103 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:08:09.103 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:08:09.103 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:08:09.103 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:08:09.103 {
00:08:09.103 "params": {
00:08:09.103 "name": "Nvme$subsystem",
00:08:09.103 "trtype": "$TEST_TRANSPORT",
00:08:09.103 "traddr": "$NVMF_FIRST_TARGET_IP",
00:08:09.103 "adrfam": "ipv4",
00:08:09.103 "trsvcid": "$NVMF_PORT",
00:08:09.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:08:09.103 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:08:09.103 "hdgst": ${hdgst:-false},
00:08:09.103 "ddgst": ${ddgst:-false}
00:08:09.103 },
00:08:09.103 "method": "bdev_nvme_attach_controller"
00:08:09.103 }
00:08:09.103 EOF
00:08:09.103 )")
00:08:09.103 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:08:09.103 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
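The rpc_cmd calls traced above forward their arguments to the target's RPC server, so the same zero-copy target can be stood up by hand with scripts/rpc.py; a sketch under that assumption, with the NQN, serial number and addresses copied from this run (the config-JSON generation that continues below feeds bdevperf rather than the target):
rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy   # TCP transport, zero-copy enabled
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 4096 -b malloc0          # 32 MiB RAM-backed bdev, 4 KiB blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
The order matters: the transport must exist before the subsystem can listen on it, and the malloc bdev must exist before it can be exposed as namespace 1.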
00:08:09.103 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:08:09.103 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:08:09.103 "params": {
00:08:09.103 "name": "Nvme1",
00:08:09.103 "trtype": "tcp",
00:08:09.103 "traddr": "10.0.0.2",
00:08:09.103 "adrfam": "ipv4",
00:08:09.103 "trsvcid": "4420",
00:08:09.103 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:08:09.103 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:08:09.103 "hdgst": false,
00:08:09.103 "ddgst": false
00:08:09.103 },
00:08:09.103 "method": "bdev_nvme_attach_controller"
00:08:09.103 }'
00:08:09.103 [2024-12-12 10:21:42.829584] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization...
00:08:09.103 [2024-12-12 10:21:42.829625] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1392755 ]
00:08:09.103 [2024-12-12 10:21:42.900205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:09.103 [2024-12-12 10:21:42.942403] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:08:09.362 Running I/O for 10 seconds...
00:08:11.676 8682.00 IOPS, 67.83 MiB/s
[2024-12-12T09:21:46.635Z] 8768.00 IOPS, 68.50 MiB/s
[2024-12-12T09:21:47.570Z] 8781.33 IOPS, 68.60 MiB/s
[2024-12-12T09:21:48.504Z] 8818.00 IOPS, 68.89 MiB/s
[2024-12-12T09:21:49.439Z] 8822.00 IOPS, 68.92 MiB/s
[2024-12-12T09:21:50.373Z] 8824.00 IOPS, 68.94 MiB/s
[2024-12-12T09:21:51.307Z] 8803.29 IOPS, 68.78 MiB/s
[2024-12-12T09:21:52.684Z] 8796.00 IOPS, 68.72 MiB/s
[2024-12-12T09:21:53.619Z] 8797.33 IOPS, 68.73 MiB/s
[2024-12-12T09:21:53.619Z] 8797.00 IOPS, 68.73 MiB/s
00:08:19.596 Latency(us)
[2024-12-12T09:21:53.619Z] Device Information  : runtime(s)     IOPS    MiB/s   Fail/s     TO/s   Average      min      max
[2024-12-12T09:21:53.619Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
[2024-12-12T09:21:53.619Z] Verification LBA range: start 0x0 length 0x1000
[2024-12-12T09:21:53.619Z] Nvme1n1             :      10.01  8799.18    68.74     0.00     0.00  14505.30  2200.14 22719.15
[2024-12-12T09:21:53.619Z] ===================================================================================================================
[2024-12-12T09:21:53.619Z] Total               :             8799.18    68.74     0.00     0.00  14505.30  2200.14 22719.15
00:08:19.596 10:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1394545
00:08:19.596 10:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:08:19.596 10:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:19.596 10:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:08:19.596 10:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:08:19.596 10:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:08:19.596 10:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:08:19.596 10:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:08:19.596 10:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:08:19.596 {
00:08:19.596 "params": {
00:08:19.596 "name": "Nvme$subsystem",
00:08:19.597 "trtype": "$TEST_TRANSPORT",
00:08:19.597 "traddr": "$NVMF_FIRST_TARGET_IP",
00:08:19.597 "adrfam": "ipv4",
00:08:19.597 "trsvcid": "$NVMF_PORT",
00:08:19.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:08:19.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:08:19.597 "hdgst": ${hdgst:-false},
00:08:19.597 "ddgst": ${ddgst:-false}
00:08:19.597 },
00:08:19.597 "method": "bdev_nvme_attach_controller"
00:08:19.597 }
00:08:19.597 EOF
00:08:19.597 )")
00:08:19.597 10:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:08:19.597 [2024-12-12 10:21:53.466836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:19.597 [2024-12-12 10:21:53.466868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:19.597 10:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:08:19.597 10:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:08:19.597 10:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:08:19.597 "params": {
00:08:19.597 "name": "Nvme1",
00:08:19.597 "trtype": "tcp",
00:08:19.597 "traddr": "10.0.0.2",
00:08:19.597 "adrfam": "ipv4",
00:08:19.597 "trsvcid": "4420",
00:08:19.597 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:08:19.597 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:08:19.597 "hdgst": false,
00:08:19.597 "ddgst": false
00:08:19.597 },
00:08:19.597 "method": "bdev_nvme_attach_controller"
00:08:19.597 }'
00:08:19.597 [2024-12-12 10:21:53.478842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:19.597 [2024-12-12 10:21:53.478856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:19.597 [2024-12-12 10:21:53.490873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:19.597 [2024-12-12 10:21:53.490883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:19.597 [2024-12-12 10:21:53.502911] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:19.597 [2024-12-12 10:21:53.502925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:19.597 [2024-12-12 10:21:53.505493] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization...
00:08:19.597 [2024-12-12 10:21:53.505535] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1394545 ]
00:08:19.597 [2024-12-12 10:21:53.514940] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:19.597 [2024-12-12 10:21:53.514954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the two *ERROR* lines above repeat as a pair at ~12 ms intervals, 10:21:53.526968 through 10:21:53.575103 ...]
00:08:19.597 [2024-12-12 10:21:53.579535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[... *ERROR* pair repeats, 10:21:53.587124 through 10:21:53.611198 ...]
00:08:19.855 [2024-12-12 10:21:53.620617] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
[... *ERROR* pair repeats, 10:21:53.623219 through 10:21:53.936099 ...]
00:08:20.114 Running I/O for 5 seconds...
00:08:20.114 [2024-12-12 10:21:53.951145] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.114 [2024-12-12 10:21:53.951165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.114 [2024-12-12 10:21:53.964868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.114 [2024-12-12 10:21:53.964889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.114 [2024-12-12 10:21:53.978351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.114 [2024-12-12 10:21:53.978369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.114 [2024-12-12 10:21:53.992627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.114 [2024-12-12 10:21:53.992646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.114 [2024-12-12 10:21:54.002963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.114 [2024-12-12 10:21:54.002982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.114 [2024-12-12 10:21:54.016846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.115 [2024-12-12 10:21:54.016867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.115 [2024-12-12 10:21:54.030513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.115 [2024-12-12 10:21:54.030532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.115 [2024-12-12 10:21:54.044085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.115 [2024-12-12 10:21:54.044104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.115 [2024-12-12 10:21:54.057865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.115 [2024-12-12 10:21:54.057885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.115 [2024-12-12 10:21:54.071359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.115 [2024-12-12 10:21:54.071379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.115 [2024-12-12 10:21:54.085622] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.115 [2024-12-12 10:21:54.085643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.115 [2024-12-12 10:21:54.101234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.115 [2024-12-12 10:21:54.101255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.115 [2024-12-12 10:21:54.114931] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.115 [2024-12-12 10:21:54.114950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.115 [2024-12-12 10:21:54.128132] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.115 [2024-12-12 10:21:54.128153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.374 [2024-12-12 10:21:54.141663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.374 
[2024-12-12 10:21:54.141683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.374 [2024-12-12 10:21:54.155386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.374 [2024-12-12 10:21:54.155407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.374 [2024-12-12 10:21:54.168830] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.374 [2024-12-12 10:21:54.168850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.374 [2024-12-12 10:21:54.182536] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.374 [2024-12-12 10:21:54.182555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.374 [2024-12-12 10:21:54.196264] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.374 [2024-12-12 10:21:54.196284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.374 [2024-12-12 10:21:54.210202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.374 [2024-12-12 10:21:54.210221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.374 [2024-12-12 10:21:54.224035] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.374 [2024-12-12 10:21:54.224054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.374 [2024-12-12 10:21:54.237589] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.374 [2024-12-12 10:21:54.237608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.374 [2024-12-12 10:21:54.251615] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.374 [2024-12-12 10:21:54.251636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.374 [2024-12-12 10:21:54.265640] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.374 [2024-12-12 10:21:54.265659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.374 [2024-12-12 10:21:54.279778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.374 [2024-12-12 10:21:54.279797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.374 [2024-12-12 10:21:54.293525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.374 [2024-12-12 10:21:54.293544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.374 [2024-12-12 10:21:54.307506] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.374 [2024-12-12 10:21:54.307526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.374 [2024-12-12 10:21:54.321053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.374 [2024-12-12 10:21:54.321074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.374 [2024-12-12 10:21:54.335014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.374 [2024-12-12 10:21:54.335038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.374 [2024-12-12 10:21:54.348845] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.374 [2024-12-12 10:21:54.348864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.374 [2024-12-12 10:21:54.362956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.374 [2024-12-12 10:21:54.362975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.374 [2024-12-12 10:21:54.376792] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.374 [2024-12-12 10:21:54.376814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.374 [2024-12-12 10:21:54.390946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.374 [2024-12-12 10:21:54.390965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.633 [2024-12-12 10:21:54.404898] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.633 [2024-12-12 10:21:54.404917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.633 [2024-12-12 10:21:54.418713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.633 [2024-12-12 10:21:54.418734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.633 [2024-12-12 10:21:54.432431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.633 [2024-12-12 10:21:54.432450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.633 [2024-12-12 10:21:54.446036] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.633 [2024-12-12 10:21:54.446054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.633 [2024-12-12 10:21:54.459542] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.633 [2024-12-12 10:21:54.459561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.633 [2024-12-12 10:21:54.473315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.633 [2024-12-12 10:21:54.473339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.633 [2024-12-12 10:21:54.487000] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.633 [2024-12-12 10:21:54.487019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.633 [2024-12-12 10:21:54.500689] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.633 [2024-12-12 10:21:54.500708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.633 [2024-12-12 10:21:54.514691] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.633 [2024-12-12 10:21:54.514710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.633 [2024-12-12 10:21:54.528586] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.633 [2024-12-12 10:21:54.528606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.633 [2024-12-12 10:21:54.541681] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.633 [2024-12-12 10:21:54.541700] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.633 [2024-12-12 10:21:54.555441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.633 [2024-12-12 10:21:54.555461] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.633 [2024-12-12 10:21:54.569187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.633 [2024-12-12 10:21:54.569206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.633 [2024-12-12 10:21:54.582803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.633 [2024-12-12 10:21:54.582822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.633 [2024-12-12 10:21:54.596698] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.633 [2024-12-12 10:21:54.596717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.633 [2024-12-12 10:21:54.610539] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.633 [2024-12-12 10:21:54.610558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.633 [2024-12-12 10:21:54.624291] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.634 [2024-12-12 10:21:54.624310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.634 [2024-12-12 10:21:54.637983] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.634 [2024-12-12 10:21:54.638003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.634 [2024-12-12 10:21:54.651616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.634 [2024-12-12 10:21:54.651635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.892 [2024-12-12 10:21:54.665582] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.892 [2024-12-12 10:21:54.665617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.892 [2024-12-12 10:21:54.679289] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.892 [2024-12-12 10:21:54.679309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.892 [2024-12-12 10:21:54.693367] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.892 [2024-12-12 10:21:54.693387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.892 [2024-12-12 10:21:54.704038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.892 [2024-12-12 10:21:54.704057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.892 [2024-12-12 10:21:54.718217] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.892 [2024-12-12 10:21:54.718236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.892 [2024-12-12 10:21:54.731922] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.892 [2024-12-12 10:21:54.731943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.892 [2024-12-12 10:21:54.746403] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.892 [2024-12-12 10:21:54.746422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.892 [2024-12-12 10:21:54.757628] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.892 [2024-12-12 10:21:54.757647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.892 [2024-12-12 10:21:54.771788] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.892 [2024-12-12 10:21:54.771807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.892 [2024-12-12 10:21:54.785235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.893 [2024-12-12 10:21:54.785254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.893 [2024-12-12 10:21:54.798992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.893 [2024-12-12 10:21:54.799010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.893 [2024-12-12 10:21:54.812540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.893 [2024-12-12 10:21:54.812564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.893 [2024-12-12 10:21:54.826257] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.893 [2024-12-12 10:21:54.826276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.893 [2024-12-12 10:21:54.839817] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.893 [2024-12-12 10:21:54.839836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.893 [2024-12-12 10:21:54.853930] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.893 [2024-12-12 10:21:54.853948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.893 [2024-12-12 10:21:54.867625] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.893 [2024-12-12 10:21:54.867644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.893 [2024-12-12 10:21:54.880953] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.893 [2024-12-12 10:21:54.880973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.893 [2024-12-12 10:21:54.894814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.893 [2024-12-12 10:21:54.894844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.893 [2024-12-12 10:21:54.908322] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.893 [2024-12-12 10:21:54.908341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.156 [2024-12-12 10:21:54.922895] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.156 [2024-12-12 10:21:54.922913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.156 [2024-12-12 10:21:54.937891] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.156 [2024-12-12 10:21:54.937910] 
00:08:21.156 16864.00 IOPS, 131.75 MiB/s [2024-12-12T09:21:55.179Z]
[... *ERROR* pair repeats, 10:21:54.951617 through 10:21:55.940537 ...]
00:08:21.943 16910.50 IOPS, 132.11 MiB/s [2024-12-12T09:21:55.966Z]
[... *ERROR* pair repeats, 10:21:55.954330 through 10:21:56.941996 ...]
00:08:22.982 16963.00 IOPS, 132.52 MiB/s [2024-12-12T09:21:57.005Z]
[... *ERROR* pair repeats, 10:21:56.955741 through 10:21:57.512247 ...]
00:08:23.760 [2024-12-12 10:21:57.526224]
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.760 [2024-12-12 10:21:57.526244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.760 [2024-12-12 10:21:57.540184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.760 [2024-12-12 10:21:57.540203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.760 [2024-12-12 10:21:57.553670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.760 [2024-12-12 10:21:57.553689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.760 [2024-12-12 10:21:57.567561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.760 [2024-12-12 10:21:57.567589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.760 [2024-12-12 10:21:57.581211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.760 [2024-12-12 10:21:57.581230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.760 [2024-12-12 10:21:57.595165] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.760 [2024-12-12 10:21:57.595184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.760 [2024-12-12 10:21:57.609137] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.760 [2024-12-12 10:21:57.609157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.760 [2024-12-12 10:21:57.623153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.760 [2024-12-12 10:21:57.623172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.760 [2024-12-12 10:21:57.636781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.760 [2024-12-12 10:21:57.636799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.760 [2024-12-12 10:21:57.651182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.760 [2024-12-12 10:21:57.651201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.760 [2024-12-12 10:21:57.664430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.760 [2024-12-12 10:21:57.664449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.760 [2024-12-12 10:21:57.678365] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.760 [2024-12-12 10:21:57.678384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.760 [2024-12-12 10:21:57.691836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.760 [2024-12-12 10:21:57.691854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.760 [2024-12-12 10:21:57.705933] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.760 [2024-12-12 10:21:57.705951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.760 [2024-12-12 10:21:57.719333] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.760 [2024-12-12 10:21:57.719352] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.760 [2024-12-12 10:21:57.733200] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.760 [2024-12-12 10:21:57.733219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.760 [2024-12-12 10:21:57.746825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.760 [2024-12-12 10:21:57.746844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.760 [2024-12-12 10:21:57.760231] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.760 [2024-12-12 10:21:57.760250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.760 [2024-12-12 10:21:57.773964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.760 [2024-12-12 10:21:57.773983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.021 [2024-12-12 10:21:57.787682] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.021 [2024-12-12 10:21:57.787701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.021 [2024-12-12 10:21:57.798716] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.021 [2024-12-12 10:21:57.798734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.021 [2024-12-12 10:21:57.813028] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.021 [2024-12-12 10:21:57.813047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.022 [2024-12-12 10:21:57.825847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.022 [2024-12-12 10:21:57.825866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.022 [2024-12-12 10:21:57.839946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.022 [2024-12-12 10:21:57.839965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.022 [2024-12-12 10:21:57.853434] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.022 [2024-12-12 10:21:57.853453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.022 [2024-12-12 10:21:57.867271] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.022 [2024-12-12 10:21:57.867290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.022 [2024-12-12 10:21:57.880914] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.022 [2024-12-12 10:21:57.880932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.022 [2024-12-12 10:21:57.894679] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.022 [2024-12-12 10:21:57.894698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.022 [2024-12-12 10:21:57.908441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.022 [2024-12-12 10:21:57.908460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.022 [2024-12-12 10:21:57.921960] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.022 [2024-12-12 10:21:57.921979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.022 [2024-12-12 10:21:57.935686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.022 [2024-12-12 10:21:57.935705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.022 [2024-12-12 10:21:57.949221] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.022 [2024-12-12 10:21:57.949240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.022 16977.75 IOPS, 132.64 MiB/s [2024-12-12T09:21:58.045Z] [2024-12-12 10:21:57.963050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.022 [2024-12-12 10:21:57.963069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.022 [2024-12-12 10:21:57.976521] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.022 [2024-12-12 10:21:57.976544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.022 [2024-12-12 10:21:57.990579] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.022 [2024-12-12 10:21:57.990600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.022 [2024-12-12 10:21:58.004785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.022 [2024-12-12 10:21:58.004809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.022 [2024-12-12 10:21:58.018870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.022 [2024-12-12 10:21:58.018890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.022 [2024-12-12 10:21:58.032889] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.022 [2024-12-12 10:21:58.032907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.304 [2024-12-12 10:21:58.048004] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.304 [2024-12-12 10:21:58.048024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.304 [2024-12-12 10:21:58.063712] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.304 [2024-12-12 10:21:58.063731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.304 [2024-12-12 10:21:58.077560] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.304 [2024-12-12 10:21:58.077584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.304 [2024-12-12 10:21:58.091353] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.304 [2024-12-12 10:21:58.091372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.304 [2024-12-12 10:21:58.105133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.304 [2024-12-12 10:21:58.105153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.304 [2024-12-12 10:21:58.119018] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:24.304 [2024-12-12 10:21:58.119037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.304 [2024-12-12 10:21:58.132720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.304 [2024-12-12 10:21:58.132739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.304 [2024-12-12 10:21:58.146246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.304 [2024-12-12 10:21:58.146265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.304 [2024-12-12 10:21:58.160744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.304 [2024-12-12 10:21:58.160763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.304 [2024-12-12 10:21:58.176605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.304 [2024-12-12 10:21:58.176625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.304 [2024-12-12 10:21:58.190442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.304 [2024-12-12 10:21:58.190462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.304 [2024-12-12 10:21:58.204045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.304 [2024-12-12 10:21:58.204065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.304 [2024-12-12 10:21:58.217662] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.304 [2024-12-12 10:21:58.217681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.304 [2024-12-12 10:21:58.231400] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.304 [2024-12-12 10:21:58.231420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.304 [2024-12-12 10:21:58.245228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.304 [2024-12-12 10:21:58.245248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.304 [2024-12-12 10:21:58.258530] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.304 [2024-12-12 10:21:58.258549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.304 [2024-12-12 10:21:58.272322] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.304 [2024-12-12 10:21:58.272346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.304 [2024-12-12 10:21:58.285943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.304 [2024-12-12 10:21:58.285962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.304 [2024-12-12 10:21:58.299458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.304 [2024-12-12 10:21:58.299478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.304 [2024-12-12 10:21:58.312986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.304 [2024-12-12 10:21:58.313005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.620 [2024-12-12 10:21:58.326913] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.620 [2024-12-12 10:21:58.326934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.620 [2024-12-12 10:21:58.341126] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.620 [2024-12-12 10:21:58.341146] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.620 [2024-12-12 10:21:58.351358] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.620 [2024-12-12 10:21:58.351378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.620 [2024-12-12 10:21:58.365329] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.620 [2024-12-12 10:21:58.365349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.620 [2024-12-12 10:21:58.379507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.620 [2024-12-12 10:21:58.379528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.620 [2024-12-12 10:21:58.390602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.620 [2024-12-12 10:21:58.390622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.620 [2024-12-12 10:21:58.405033] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.620 [2024-12-12 10:21:58.405053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.620 [2024-12-12 10:21:58.418858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.620 [2024-12-12 10:21:58.418878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.620 [2024-12-12 10:21:58.432884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.620 [2024-12-12 10:21:58.432902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.620 [2024-12-12 10:21:58.446721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.620 [2024-12-12 10:21:58.446742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.620 [2024-12-12 10:21:58.459906] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.620 [2024-12-12 10:21:58.459925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.620 [2024-12-12 10:21:58.473733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.620 [2024-12-12 10:21:58.473754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.620 [2024-12-12 10:21:58.487667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.620 [2024-12-12 10:21:58.487688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.620 [2024-12-12 10:21:58.501630] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.620 [2024-12-12 10:21:58.501650] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.620 [2024-12-12 10:21:58.515279] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.620 [2024-12-12 10:21:58.515298] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.620 [2024-12-12 10:21:58.529069] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.620 [2024-12-12 10:21:58.529097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.620 [2024-12-12 10:21:58.543264] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.620 [2024-12-12 10:21:58.543283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.620 [2024-12-12 10:21:58.553850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.620 [2024-12-12 10:21:58.553868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.620 [2024-12-12 10:21:58.567974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.620 [2024-12-12 10:21:58.567992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.620 [2024-12-12 10:21:58.581821] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.620 [2024-12-12 10:21:58.581840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.620 [2024-12-12 10:21:58.592583] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.620 [2024-12-12 10:21:58.592603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.620 [2024-12-12 10:21:58.606859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.620 [2024-12-12 10:21:58.606879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.620 [2024-12-12 10:21:58.620769] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.620 [2024-12-12 10:21:58.620788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.620 [2024-12-12 10:21:58.634279] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.620 [2024-12-12 10:21:58.634297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.879 [2024-12-12 10:21:58.648359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.879 [2024-12-12 10:21:58.648379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.879 [2024-12-12 10:21:58.662256] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.879 [2024-12-12 10:21:58.662275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.879 [2024-12-12 10:21:58.676256] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.879 [2024-12-12 10:21:58.676275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.879 [2024-12-12 10:21:58.690309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.879 [2024-12-12 10:21:58.690328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.879 [2024-12-12 10:21:58.703912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.879 [2024-12-12 10:21:58.703931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.879 [2024-12-12 10:21:58.718376] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.879 [2024-12-12 10:21:58.718395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.879 [2024-12-12 10:21:58.730273] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.879 [2024-12-12 10:21:58.730291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.879 [2024-12-12 10:21:58.743862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.880 [2024-12-12 10:21:58.743881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.880 [2024-12-12 10:21:58.757625] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.880 [2024-12-12 10:21:58.757649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.880 [2024-12-12 10:21:58.771553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.880 [2024-12-12 10:21:58.771576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.880 [2024-12-12 10:21:58.785404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.880 [2024-12-12 10:21:58.785428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.880 [2024-12-12 10:21:58.799111] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.880 [2024-12-12 10:21:58.799130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.880 [2024-12-12 10:21:58.813049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.880 [2024-12-12 10:21:58.813068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.880 [2024-12-12 10:21:58.826524] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.880 [2024-12-12 10:21:58.826545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.880 [2024-12-12 10:21:58.840398] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.880 [2024-12-12 10:21:58.840416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.880 [2024-12-12 10:21:58.854160] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.880 [2024-12-12 10:21:58.854179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.880 [2024-12-12 10:21:58.868108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.880 [2024-12-12 10:21:58.868127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.880 [2024-12-12 10:21:58.882176] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.880 [2024-12-12 10:21:58.882195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.880 [2024-12-12 10:21:58.895937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.880 [2024-12-12 10:21:58.895956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.139 [2024-12-12 10:21:58.909847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.139 [2024-12-12 10:21:58.909868] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.139 [2024-12-12 10:21:58.923772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.139 [2024-12-12 10:21:58.923791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.139 [2024-12-12 10:21:58.937439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.139 [2024-12-12 10:21:58.937464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.139 [2024-12-12 10:21:58.950841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.139 [2024-12-12 10:21:58.950859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.139 16977.40 IOPS, 132.64 MiB/s 00:08:25.139 Latency(us) 00:08:25.139 [2024-12-12T09:21:59.162Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:25.139 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:08:25.139 Nvme1n1 : 5.01 16979.50 132.65 0.00 0.00 7531.30 3432.84 17601.10 00:08:25.139 [2024-12-12T09:21:59.162Z] =================================================================================================================== 00:08:25.139 [2024-12-12T09:21:59.162Z] Total : 16979.50 132.65 0.00 0.00 7531.30 3432.84 17601.10 00:08:25.139 [2024-12-12 10:21:58.960739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.139 [2024-12-12 10:21:58.960758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.139 [2024-12-12 10:21:58.972767] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.139 [2024-12-12 10:21:58.972782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.139 [2024-12-12 10:21:58.984809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.139 [2024-12-12 10:21:58.984836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.139 [2024-12-12 10:21:58.996846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.139 [2024-12-12 10:21:58.996864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.139 [2024-12-12 10:21:59.008874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.139 [2024-12-12 10:21:59.008888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.139 [2024-12-12 10:21:59.020896] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.139 [2024-12-12 10:21:59.020911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.139 [2024-12-12 10:21:59.032927] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.139 [2024-12-12 10:21:59.032941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.140 [2024-12-12 10:21:59.044959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.140 [2024-12-12 10:21:59.044975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.140 [2024-12-12 10:21:59.056988] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.140 [2024-12-12 10:21:59.057002] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.140 [2024-12-12 10:21:59.069017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.140 [2024-12-12 10:21:59.069030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.140 [2024-12-12 10:21:59.081054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.140 [2024-12-12 10:21:59.081065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.140 [2024-12-12 10:21:59.093088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.140 [2024-12-12 10:21:59.093100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.140 [2024-12-12 10:21:59.105116] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.140 [2024-12-12 10:21:59.105127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.140 [2024-12-12 10:21:59.117148] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.140 [2024-12-12 10:21:59.117158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.140 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1394545) - No such process 00:08:25.140 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1394545 00:08:25.140 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.140 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.140 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:25.140 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.140 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:25.140 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.140 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:25.140 delay0 00:08:25.140 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.140 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:25.140 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.140 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:25.140 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.140 10:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:25.398 [2024-12-12 10:21:59.262367] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:31.961 Initializing NVMe Controllers 00:08:31.961 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
00:08:31.961 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:31.961 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:31.961 Initialization complete. Launching workers.
00:08:31.961 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 102
00:08:31.961 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 389, failed to submit 33
00:08:31.961 success 203, unsuccessful 186, failed 0
00:08:31.961 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:08:31.961 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:08:31.961 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:31.961 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:08:31.961 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:31.961 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:08:31.961 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:31.961 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:08:31.961 rmmod nvme_tcp
00:08:31.961 rmmod nvme_fabrics
00:08:31.961 rmmod nvme_keyring
00:08:31.961 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:31.961 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:08:31.961 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:08:31.961 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1392735 ']'
00:08:31.961 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1392735
00:08:31.961 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1392735 ']'
00:08:31.961 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1392735
00:08:31.961 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:08:31.961 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:31.961 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1392735
00:08:31.961 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:08:31.961 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:08:31.961 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1392735'
00:08:31.961 killing process with pid 1392735
00:08:31.961 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1392735
00:08:31.961 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1392735
00:08:31.961 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:08:31.961 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:08:31.961 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:08:31.961 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:08:31.961 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:08:31.961 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:08:31.961 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:08:31.961 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:08:31.961 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:08:31.961 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:31.961 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:31.961 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:33.866 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:08:33.866 
00:08:33.866 real 0m31.451s
00:08:33.866 user 0m42.182s
00:08:33.866 sys 0m11.020s
00:08:33.866 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:33.866 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:33.866 ************************************
00:08:33.866 END TEST nvmf_zcopy
00:08:33.866 ************************************
00:08:33.866 10:22:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:08:33.866 10:22:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:08:33.866 10:22:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:33.866 10:22:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:33.866 ************************************
00:08:33.866 START TEST nvmf_nmic
00:08:33.866 ************************************
00:08:33.866 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
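For anyone replaying the zcopy teardown traced above by hand, the following is a minimal sketch of the same RPC sequence. It assumes a running SPDK nvmf target that already exposes a malloc0 bdev through nqn.2016-06.io.spdk:cnode1 and listens on 10.0.0.2:4420; the rpc shell variable is an illustrative shorthand for the stock scripts/rpc.py client, and the commands and flags are taken from the trace itself:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Detach the namespace that the repeated add attempts above kept colliding with.
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    # Wrap malloc0 in a delay bdev so abort requests have slow in-flight I/O to hit
    # (-r/-t/-w/-n are the average/p99 read and write latencies, values as traced).
    $rpc bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # Publish the delayed bdev as NSID 1 again, then fire the abort example at it over TCP.
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 \
        -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The success/unsuccessful split the abort run reports above simply reflects which abort commands won the race against their still-delayed target I/O.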
00:08:33.866 * Looking for test storage...
00:08:33.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:08:33.866 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:08:33.866 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version
00:08:33.866 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-:
00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1
00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-:
00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2
00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<'
00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2
00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1
00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in
00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1
00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1
00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1
00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1
00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1
00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2
00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2
00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2
00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2
00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0
00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:08:34.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:34.126 --rc genhtml_branch_coverage=1
00:08:34.126 --rc genhtml_function_coverage=1
00:08:34.126 --rc genhtml_legend=1
00:08:34.126 --rc geninfo_all_blocks=1
00:08:34.126 --rc geninfo_unexecuted_blocks=1
00:08:34.126 
00:08:34.126 '
00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:08:34.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:34.126 --rc genhtml_branch_coverage=1
00:08:34.126 --rc genhtml_function_coverage=1
00:08:34.126 --rc genhtml_legend=1
00:08:34.126 --rc geninfo_all_blocks=1
00:08:34.126 --rc geninfo_unexecuted_blocks=1
00:08:34.126 
00:08:34.126 '
00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:08:34.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:34.126 --rc genhtml_branch_coverage=1
00:08:34.126 --rc genhtml_function_coverage=1
00:08:34.126 --rc genhtml_legend=1
00:08:34.126 --rc geninfo_all_blocks=1
00:08:34.126 --rc geninfo_unexecuted_blocks=1
00:08:34.126 
00:08:34.126 '
00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:08:34.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:34.126 --rc genhtml_branch_coverage=1
00:08:34.126 --rc genhtml_function_coverage=1
00:08:34.126 --rc genhtml_legend=1
00:08:34.126 --rc geninfo_all_blocks=1
00:08:34.126 --rc geninfo_unexecuted_blocks=1
00:08:34.126 
00:08:34.126 '
00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
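The scripts/common.sh trace above ('lt 1.15 2' via cmp_versions) is a plain field-by-field numeric comparison: both version strings are split on '.', '-' and ':', each field is normalized to a decimal, and the first unequal field decides. A self-contained sketch of that logic (my_version_lt is a hypothetical name; the real helpers live in spdk/scripts/common.sh):

    # Returns 0 (true) when $1 is a strictly older version than $2,
    # e.g. my_version_lt 1.15 2 succeeds, matching the trace above.
    my_version_lt() {
        local -a ver1 ver2
        local v len a b
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            # Missing fields compare as 0; non-numeric fields are also treated
            # as 0 in this sketch (the real decimal() handles a few more formats).
            a=${ver1[v]:-0}; b=${ver2[v]:-0}
            [[ $a =~ ^[0-9]+$ ]] || a=0
            [[ $b =~ ^[0-9]+$ ]] || b=0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1  # equal versions are not "less than"
    }

So the '@368 -- # return 0' above is cmp_versions reporting 1.15 < 2, which is presumably why @1712 then selects the pre-2.0 '--rc lcov_*' option spelling for LCOV_OPTS.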
00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.126 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:34.127 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.127 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:34.127 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:34.127 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:34.127 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:34.127 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:34.127 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:34.127 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:34.127 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:34.127 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:34.127 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:34.127 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:34.127 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:34.127 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:34.127 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:34.127 
10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:34.127 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:34.127 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:34.127 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:34.127 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:34.127 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.127 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:34.127 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.127 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:34.127 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:34.127 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:34.127 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:40.696 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:40.696 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:40.696 10:22:13 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:40.696 Found net devices under 0000:af:00.0: cvl_0_0 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:40.696 Found net devices under 0000:af:00.1: cvl_0_1 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:08:40.696 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:40.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:40.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.339 ms 00:08:40.697 00:08:40.697 --- 10.0.0.2 ping statistics --- 00:08:40.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.697 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:40.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:40.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:08:40.697 00:08:40.697 --- 10.0.0.1 ping statistics --- 00:08:40.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.697 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1400034 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1400034 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1400034 ']' 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:40.697 10:22:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:40.697 [2024-12-12 10:22:14.012254] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
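
The nvmf_tcp_init sequence traced above boils down to a small amount of ip/iptables plumbing. A minimal standalone sketch, assuming the same cvl_0_0/cvl_0_1 port names, namespace name, and 10.0.0.0/24 addressing seen in this run (on other hardware the names and addresses would differ):

  TARGET_IF=cvl_0_0        # moved into the namespace, becomes 10.0.0.2 (target side)
  INITIATOR_IF=cvl_0_1     # stays in the default namespace as 10.0.0.1 (host side)
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TARGET_IF"
  ip -4 addr flush "$INITIATOR_IF"
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up

  # Open the NVMe/TCP port, tagged SPDK_NVMF so cleanup can strip the rule later.
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

  # Both directions must answer before the target is started.
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1

Moving one port of the two-port E810 NIC into a namespace while the other stays outside is what lets the harness exercise a real end-to-end TCP path on a single machine.
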
00:08:40.697 [2024-12-12 10:22:14.012298] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.697 [2024-12-12 10:22:14.094723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:40.697 [2024-12-12 10:22:14.137779] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:40.697 [2024-12-12 10:22:14.137814] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:40.697 [2024-12-12 10:22:14.137822] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:40.697 [2024-12-12 10:22:14.137828] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:40.697 [2024-12-12 10:22:14.137833] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:40.697 [2024-12-12 10:22:14.139264] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.697 [2024-12-12 10:22:14.139290] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:40.697 [2024-12-12 10:22:14.139398] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.697 [2024-12-12 10:22:14.139399] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:08:40.956 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:40.956 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:08:40.956 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:40.956 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:40.956 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:40.956 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:40.956 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:40.956 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.956 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:40.956 [2024-12-12 10:22:14.899359] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:40.956 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.956 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:40.956 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.956 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:40.956 Malloc0 00:08:40.956 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.956 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:40.956 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.956 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:08:40.956 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.956 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:40.956 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.956 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:40.956 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.956 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:40.956 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.956 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:40.956 [2024-12-12 10:22:14.965650] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:40.956 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.956 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:40.956 test case1: single bdev can't be used in multiple subsystems 00:08:40.956 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:40.956 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.956 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:41.215 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.215 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:41.215 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.215 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:41.215 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.215 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:41.215 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:41.215 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.215 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:41.215 [2024-12-12 10:22:14.993560] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:41.215 [2024-12-12 10:22:14.993584] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:41.215 [2024-12-12 10:22:14.993595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.215 request: 00:08:41.215 { 00:08:41.215 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:41.215 "namespace": { 00:08:41.215 "bdev_name": "Malloc0", 00:08:41.215 "no_auto_visible": false, 
00:08:41.215 "hide_metadata": false 00:08:41.215 }, 00:08:41.215 "method": "nvmf_subsystem_add_ns", 00:08:41.215 "req_id": 1 00:08:41.215 } 00:08:41.215 Got JSON-RPC error response 00:08:41.215 response: 00:08:41.215 { 00:08:41.215 "code": -32602, 00:08:41.215 "message": "Invalid parameters" 00:08:41.215 } 00:08:41.215 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:41.215 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:41.216 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:41.216 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:41.216 Adding namespace failed - expected result. 00:08:41.216 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:41.216 test case2: host connect to nvmf target in multiple paths 00:08:41.216 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:08:41.216 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.216 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:41.216 [2024-12-12 10:22:15.005677] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:08:41.216 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.216 10:22:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:42.591 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:08:43.523 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:43.523 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:08:43.523 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:43.523 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:43.523 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:08:45.423 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:45.423 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:45.423 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:45.423 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:45.423 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:45.423 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:08:45.423 10:22:19 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:45.423 [global] 00:08:45.423 thread=1 00:08:45.423 invalidate=1 00:08:45.423 rw=write 00:08:45.423 time_based=1 00:08:45.423 runtime=1 00:08:45.423 ioengine=libaio 00:08:45.423 direct=1 00:08:45.423 bs=4096 00:08:45.423 iodepth=1 00:08:45.423 norandommap=0 00:08:45.423 numjobs=1 00:08:45.423 00:08:45.423 verify_dump=1 00:08:45.423 verify_backlog=512 00:08:45.423 verify_state_save=0 00:08:45.423 do_verify=1 00:08:45.423 verify=crc32c-intel 00:08:45.423 [job0] 00:08:45.423 filename=/dev/nvme0n1 00:08:45.423 Could not set queue depth (nvme0n1) 00:08:45.680 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:45.680 fio-3.35 00:08:45.680 Starting 1 thread 00:08:47.055 00:08:47.055 job0: (groupid=0, jobs=1): err= 0: pid=1401088: Thu Dec 12 10:22:20 2024 00:08:47.055 read: IOPS=355, BW=1424KiB/s (1458kB/s)(1468KiB/1031msec) 00:08:47.055 slat (nsec): min=6549, max=26719, avg=8144.57, stdev=3817.78 00:08:47.055 clat (usec): min=201, max=42112, avg=2603.08, stdev=9529.32 00:08:47.055 lat (usec): min=208, max=42133, avg=2611.23, stdev=9532.65 00:08:47.055 clat percentiles (usec): 00:08:47.055 | 1.00th=[ 206], 5.00th=[ 212], 10.00th=[ 219], 20.00th=[ 225], 00:08:47.055 | 30.00th=[ 243], 40.00th=[ 253], 50.00th=[ 258], 60.00th=[ 262], 00:08:47.055 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 388], 95.00th=[41157], 00:08:47.055 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:47.055 | 99.99th=[42206] 00:08:47.055 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:08:47.055 slat (nsec): min=9254, max=36460, avg=10330.88, stdev=1927.64 00:08:47.055 clat (usec): min=110, max=368, avg=127.33, stdev=15.50 00:08:47.055 lat (usec): min=120, max=402, avg=137.66, stdev=16.88 00:08:47.055 clat percentiles (usec): 00:08:47.055 | 1.00th=[ 115], 5.00th=[ 118], 10.00th=[ 120], 20.00th=[ 122], 00:08:47.055 | 30.00th=[ 123], 40.00th=[ 124], 50.00th=[ 126], 60.00th=[ 127], 00:08:47.055 | 70.00th=[ 128], 80.00th=[ 130], 90.00th=[ 135], 95.00th=[ 145], 00:08:47.055 | 99.00th=[ 169], 99.50th=[ 172], 99.90th=[ 367], 99.95th=[ 367], 00:08:47.055 | 99.99th=[ 367] 00:08:47.055 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:08:47.055 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:47.055 lat (usec) : 250=72.92%, 500=24.57%, 750=0.11% 00:08:47.055 lat (msec) : 50=2.39% 00:08:47.055 cpu : usr=0.49%, sys=0.78%, ctx=879, majf=0, minf=1 00:08:47.055 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:47.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:47.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:47.055 issued rwts: total=367,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:47.055 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:47.055 00:08:47.055 Run status group 0 (all jobs): 00:08:47.055 READ: bw=1424KiB/s (1458kB/s), 1424KiB/s-1424KiB/s (1458kB/s-1458kB/s), io=1468KiB (1503kB), run=1031-1031msec 00:08:47.055 WRITE: bw=1986KiB/s (2034kB/s), 1986KiB/s-1986KiB/s (2034kB/s-2034kB/s), io=2048KiB (2097kB), run=1031-1031msec 00:08:47.055 00:08:47.055 Disk stats (read/write): 00:08:47.055 nvme0n1: ios=413/512, merge=0/0, ticks=809/63, in_queue=872, util=91.48% 00:08:47.055 10:22:20 
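
fio-wrapper expanded its flags into the job printed above (-i 4096 -> bs, -d 1 -> iodepth, -t write -> rw, -r 1 -> runtime, -v -> crc32c-intel verification). An equivalent standalone job file, sketched from that output; /dev/nvme0n1 is simply whatever device name the multipath connect produced on this host:

  cat > nmic.fio <<'EOF'
  [global]
  thread=1
  invalidate=1
  rw=write
  time_based=1
  runtime=1
  ioengine=libaio
  direct=1
  bs=4096
  iodepth=1
  norandommap=0
  numjobs=1
  verify_dump=1
  verify_backlog=512
  verify_state_save=0
  do_verify=1
  verify=crc32c-intel

  [job0]
  filename=/dev/nvme0n1
  EOF
  fio nmic.fio

The "Could not set queue depth (nvme0n1)" note did not stop the run: the job still completed its 1-second write pass and the crc32c-intel verify, as the stats above show.
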
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:47.055 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:47.055 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:47.055 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:08:47.055 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:47.055 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:47.055 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:47.055 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:47.055 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:08:47.055 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:47.055 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:47.055 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:47.055 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:08:47.055 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:47.055 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:08:47.055 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:47.055 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:47.055 rmmod nvme_tcp 00:08:47.314 rmmod nvme_fabrics 00:08:47.314 rmmod nvme_keyring 00:08:47.314 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:47.314 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:08:47.314 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:08:47.314 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1400034 ']' 00:08:47.314 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1400034 00:08:47.314 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1400034 ']' 00:08:47.314 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1400034 00:08:47.314 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:08:47.314 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:47.314 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1400034 00:08:47.314 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:47.314 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:47.314 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1400034' 00:08:47.314 killing process with pid 1400034 00:08:47.314 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1400034 00:08:47.314 10:22:21 
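
Teardown runs in roughly the reverse order of setup. A minimal sketch of the same steps; NVMF_PID stands in for the nvmf_tgt pid (1400034 in this run), and the namespace removal is the assumed equivalent of the xtrace-silenced _remove_spdk_ns above:

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # drops both paths (4420 and 4421)

  sync
  modprobe -v -r nvme-tcp       # also pulls out nvme_fabrics / nvme_keyring
  modprobe -v -r nvme-fabrics

  kill "$NVMF_PID"
  wait "$NVMF_PID"              # only valid if nvmf_tgt is a child of this shell

  # Strip only the rules tagged SPDK_NVMF; everything else is restored verbatim.
  iptables-save | grep -v SPDK_NVMF | iptables-restore

  ip netns delete cvl_0_0_ns_spdk   # returns cvl_0_0 to the default namespace
  ip -4 addr flush cvl_0_1
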
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1400034 00:08:47.573 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:47.573 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:47.573 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:47.573 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:08:47.573 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:08:47.573 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:47.573 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:08:47.573 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:47.573 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:47.573 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.573 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:47.573 10:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.477 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:49.477 00:08:49.477 real 0m15.661s 00:08:49.477 user 0m36.234s 00:08:49.477 sys 0m5.296s 00:08:49.477 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.477 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.477 ************************************ 00:08:49.477 END TEST nvmf_nmic 00:08:49.477 ************************************ 00:08:49.477 10:22:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:49.477 10:22:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:49.477 10:22:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:49.477 10:22:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:49.737 ************************************ 00:08:49.737 START TEST nvmf_fio_target 00:08:49.737 ************************************ 00:08:49.737 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:49.737 * Looking for test storage... 
00:08:49.737 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:49.737 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:49.737 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:08:49.737 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:49.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.738 --rc genhtml_branch_coverage=1 00:08:49.738 --rc genhtml_function_coverage=1 00:08:49.738 --rc genhtml_legend=1 00:08:49.738 --rc geninfo_all_blocks=1 00:08:49.738 --rc geninfo_unexecuted_blocks=1 00:08:49.738 00:08:49.738 ' 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:49.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.738 --rc genhtml_branch_coverage=1 00:08:49.738 --rc genhtml_function_coverage=1 00:08:49.738 --rc genhtml_legend=1 00:08:49.738 --rc geninfo_all_blocks=1 00:08:49.738 --rc geninfo_unexecuted_blocks=1 00:08:49.738 00:08:49.738 ' 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:49.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.738 --rc genhtml_branch_coverage=1 00:08:49.738 --rc genhtml_function_coverage=1 00:08:49.738 --rc genhtml_legend=1 00:08:49.738 --rc geninfo_all_blocks=1 00:08:49.738 --rc geninfo_unexecuted_blocks=1 00:08:49.738 00:08:49.738 ' 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:49.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.738 --rc genhtml_branch_coverage=1 00:08:49.738 --rc genhtml_function_coverage=1 00:08:49.738 --rc genhtml_legend=1 00:08:49.738 --rc geninfo_all_blocks=1 00:08:49.738 --rc geninfo_unexecuted_blocks=1 00:08:49.738 00:08:49.738 ' 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:49.738 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:49.738 10:22:23 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:49.738 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.739 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:49.739 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.739 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:49.739 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:49.739 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:08:49.739 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:56.315 10:22:29 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:56.315 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:56.315 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:56.315 10:22:29 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:56.315 Found net devices under 0000:af:00.0: cvl_0_0 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:56.315 Found net devices under 0000:af:00.1: cvl_0_1 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:56.315 10:22:29 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:56.315 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:56.315 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.469 ms 00:08:56.315 00:08:56.315 --- 10.0.0.2 ping statistics --- 00:08:56.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.315 rtt min/avg/max/mdev = 0.469/0.469/0.469/0.000 ms 00:08:56.315 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:56.315 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:56.315 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:08:56.315 00:08:56.316 --- 10.0.0.1 ping statistics --- 00:08:56.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.316 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:08:56.316 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:56.316 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:08:56.316 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:56.316 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:56.316 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:56.316 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:56.316 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:56.316 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:56.316 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:56.316 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:08:56.316 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:56.316 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:56.316 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:56.316 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1404796 00:08:56.316 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:56.316 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1404796 00:08:56.316 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1404796 ']' 00:08:56.316 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.316 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:56.316 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.316 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:56.316 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:56.316 [2024-12-12 10:22:29.744544] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
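For readability, here is a condensed sketch of the topology that nvmf_tcp_init builds in the trace above: the two ports of the e810 NIC (cvl_0_0, cvl_0_1) are assumed to be cabled back-to-back, one port is moved into a private network namespace to act as the target, and the other stays in the root namespace as the initiator. Every command below is lifted from the trace; only set -e and the comments are added.

#!/usr/bin/env bash
set -e
ip netns add cvl_0_0_ns_spdk                          # target side gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move one NIC port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator IP, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# let NVMe/TCP traffic (port 4420) in on the initiator-facing interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> root ns

With both pings answered, nvmf_tgt is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF), so target and initiator exchange real NIC-to-NIC TCP traffic on a single host.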
00:08:56.316 [2024-12-12 10:22:29.744603] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:56.316 [2024-12-12 10:22:29.823821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:56.316 [2024-12-12 10:22:29.865277] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:56.316 [2024-12-12 10:22:29.865313] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:56.316 [2024-12-12 10:22:29.865320] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:56.316 [2024-12-12 10:22:29.865326] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:56.316 [2024-12-12 10:22:29.865331] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:56.316 [2024-12-12 10:22:29.866786] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:56.316 [2024-12-12 10:22:29.866895] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:56.316 [2024-12-12 10:22:29.867003] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.316 [2024-12-12 10:22:29.867003] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:08:56.316 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:56.316 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:08:56.316 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:56.316 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:56.316 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:56.316 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:56.316 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:56.316 [2024-12-12 10:22:30.185494] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:56.316 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:56.574 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:08:56.574 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:56.832 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:08:56.832 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:57.090 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:08:57.090 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:57.090 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:08:57.090 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:08:57.347 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:57.606 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:08:57.606 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:57.864 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:08:57.864 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:58.122 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:08:58.122 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:08:58.380 10:22:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:58.380 10:22:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:58.380 10:22:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:58.637 10:22:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:58.637 10:22:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:58.896 10:22:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:59.154 [2024-12-12 10:22:32.922010] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:59.154 10:22:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:08:59.154 10:22:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:08:59.412 10:22:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:00.785 10:22:34 
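At this point the target is provisioned end to end. Condensed — with the full /var/jenkins/.../spdk/scripts/rpc.py path shortened to rpc.py and the per-bdev calls folded into a loop, where the trace issues them one at a time — the sequence fio.sh drove above is:

rpc.py nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB in-capsule data
rpc.py bdev_malloc_create 64 512                      # repeated for Malloc0..Malloc6 (64 MiB, 512 B blocks)
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do         # four namespaces -> nvme0n1..nvme0n4
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
done
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # hostnqn/hostid elided here

The fio-wrapper invocations that follow map directly onto the job files they generate: -i 4096 becomes bs=4096, -d the iodepth, -t the rw mode, -r the runtime in seconds, with one [jobN] stanza per connected namespace (/dev/nvme0n1 .. /dev/nvme0n4).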
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:00.785 10:22:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:00.785 10:22:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:00.785 10:22:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:00.785 10:22:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:00.785 10:22:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:02.683 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:02.683 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:02.683 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:02.683 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:02.683 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:02.683 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:02.683 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:02.683 [global] 00:09:02.683 thread=1 00:09:02.683 invalidate=1 00:09:02.683 rw=write 00:09:02.683 time_based=1 00:09:02.683 runtime=1 00:09:02.683 ioengine=libaio 00:09:02.683 direct=1 00:09:02.683 bs=4096 00:09:02.683 iodepth=1 00:09:02.683 norandommap=0 00:09:02.683 numjobs=1 00:09:02.683 00:09:02.683 verify_dump=1 00:09:02.683 verify_backlog=512 00:09:02.683 verify_state_save=0 00:09:02.683 do_verify=1 00:09:02.683 verify=crc32c-intel 00:09:02.683 [job0] 00:09:02.683 filename=/dev/nvme0n1 00:09:02.683 [job1] 00:09:02.683 filename=/dev/nvme0n2 00:09:02.683 [job2] 00:09:02.683 filename=/dev/nvme0n3 00:09:02.683 [job3] 00:09:02.683 filename=/dev/nvme0n4 00:09:02.683 Could not set queue depth (nvme0n1) 00:09:02.683 Could not set queue depth (nvme0n2) 00:09:02.683 Could not set queue depth (nvme0n3) 00:09:02.683 Could not set queue depth (nvme0n4) 00:09:02.941 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:02.941 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:02.941 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:02.941 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:02.941 fio-3.35 00:09:02.941 Starting 4 threads 00:09:04.332 00:09:04.332 job0: (groupid=0, jobs=1): err= 0: pid=1406131: Thu Dec 12 10:22:38 2024 00:09:04.332 read: IOPS=23, BW=92.4KiB/s (94.6kB/s)(96.0KiB/1039msec) 00:09:04.332 slat (nsec): min=10471, max=24169, avg=17420.54, stdev=4489.64 00:09:04.332 clat (usec): min=386, max=41990, avg=39346.97, stdev=8304.15 00:09:04.332 lat (usec): min=410, max=42005, avg=39364.39, stdev=8302.74 00:09:04.332 clat percentiles (usec): 00:09:04.332 | 1.00th=[ 388], 5.00th=[40633], 10.00th=[40633], 
20.00th=[40633], 00:09:04.332 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:04.332 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:09:04.332 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:04.332 | 99.99th=[42206] 00:09:04.332 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:09:04.332 slat (nsec): min=11101, max=40088, avg=13374.74, stdev=2363.22 00:09:04.332 clat (usec): min=127, max=296, avg=166.78, stdev=18.25 00:09:04.332 lat (usec): min=139, max=309, avg=180.15, stdev=18.32 00:09:04.332 clat percentiles (usec): 00:09:04.332 | 1.00th=[ 133], 5.00th=[ 137], 10.00th=[ 143], 20.00th=[ 149], 00:09:04.332 | 30.00th=[ 157], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:09:04.332 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 188], 95.00th=[ 194], 00:09:04.332 | 99.00th=[ 206], 99.50th=[ 217], 99.90th=[ 297], 99.95th=[ 297], 00:09:04.332 | 99.99th=[ 297] 00:09:04.332 bw ( KiB/s): min= 4096, max= 4096, per=23.79%, avg=4096.00, stdev= 0.00, samples=1 00:09:04.332 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:04.332 lat (usec) : 250=95.34%, 500=0.37% 00:09:04.332 lat (msec) : 50=4.29% 00:09:04.332 cpu : usr=0.29%, sys=0.58%, ctx=537, majf=0, minf=1 00:09:04.332 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:04.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.332 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.332 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:04.332 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:04.332 job1: (groupid=0, jobs=1): err= 0: pid=1406143: Thu Dec 12 10:22:38 2024 00:09:04.332 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:04.332 slat (nsec): min=7350, max=47903, avg=8579.56, stdev=1742.12 00:09:04.332 clat (usec): min=151, max=614, avg=194.18, stdev=30.69 00:09:04.332 lat (usec): min=159, max=622, avg=202.76, stdev=30.74 00:09:04.332 clat percentiles (usec): 00:09:04.332 | 1.00th=[ 161], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 172], 00:09:04.332 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 188], 00:09:04.332 | 70.00th=[ 196], 80.00th=[ 229], 90.00th=[ 247], 95.00th=[ 255], 00:09:04.332 | 99.00th=[ 265], 99.50th=[ 269], 99.90th=[ 289], 99.95th=[ 363], 00:09:04.332 | 99.99th=[ 611] 00:09:04.332 write: IOPS=2933, BW=11.5MiB/s (12.0MB/s)(11.5MiB/1001msec); 0 zone resets 00:09:04.332 slat (usec): min=10, max=23827, avg=20.33, stdev=439.53 00:09:04.332 clat (usec): min=107, max=321, avg=137.88, stdev=18.36 00:09:04.332 lat (usec): min=119, max=24006, avg=158.21, stdev=440.71 00:09:04.332 clat percentiles (usec): 00:09:04.332 | 1.00th=[ 116], 5.00th=[ 120], 10.00th=[ 122], 20.00th=[ 125], 00:09:04.332 | 30.00th=[ 127], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 137], 00:09:04.332 | 70.00th=[ 141], 80.00th=[ 149], 90.00th=[ 165], 95.00th=[ 178], 00:09:04.332 | 99.00th=[ 194], 99.50th=[ 206], 99.90th=[ 277], 99.95th=[ 314], 00:09:04.332 | 99.99th=[ 322] 00:09:04.332 bw ( KiB/s): min=12288, max=12288, per=71.37%, avg=12288.00, stdev= 0.00, samples=1 00:09:04.332 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:04.332 lat (usec) : 250=96.29%, 500=3.69%, 750=0.02% 00:09:04.332 cpu : usr=5.30%, sys=8.10%, ctx=5498, majf=0, minf=1 00:09:04.332 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:04.332 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.332 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.332 issued rwts: total=2560,2936,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:04.332 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:04.332 job2: (groupid=0, jobs=1): err= 0: pid=1406160: Thu Dec 12 10:22:38 2024 00:09:04.332 read: IOPS=21, BW=87.5KiB/s (89.6kB/s)(88.0KiB/1006msec) 00:09:04.332 slat (nsec): min=9864, max=28736, avg=22483.59, stdev=3106.58 00:09:04.332 clat (usec): min=40857, max=43049, avg=41050.22, stdev=448.87 00:09:04.333 lat (usec): min=40867, max=43078, avg=41072.71, stdev=450.38 00:09:04.333 clat percentiles (usec): 00:09:04.333 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:04.333 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:04.333 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:04.333 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:09:04.333 | 99.99th=[43254] 00:09:04.333 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:09:04.333 slat (nsec): min=10945, max=44915, avg=13855.56, stdev=2047.29 00:09:04.333 clat (usec): min=129, max=274, avg=181.88, stdev=20.79 00:09:04.333 lat (usec): min=141, max=304, avg=195.74, stdev=21.46 00:09:04.333 clat percentiles (usec): 00:09:04.333 | 1.00th=[ 137], 5.00th=[ 145], 10.00th=[ 161], 20.00th=[ 167], 00:09:04.333 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 184], 00:09:04.333 | 70.00th=[ 190], 80.00th=[ 198], 90.00th=[ 210], 95.00th=[ 223], 00:09:04.333 | 99.00th=[ 237], 99.50th=[ 243], 99.90th=[ 273], 99.95th=[ 273], 00:09:04.333 | 99.99th=[ 273] 00:09:04.333 bw ( KiB/s): min= 4096, max= 4096, per=23.79%, avg=4096.00, stdev= 0.00, samples=1 00:09:04.333 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:04.333 lat (usec) : 250=95.51%, 500=0.37% 00:09:04.333 lat (msec) : 50=4.12% 00:09:04.333 cpu : usr=1.00%, sys=0.50%, ctx=535, majf=0, minf=1 00:09:04.333 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:04.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.333 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.333 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:04.333 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:04.333 job3: (groupid=0, jobs=1): err= 0: pid=1406166: Thu Dec 12 10:22:38 2024 00:09:04.333 read: IOPS=25, BW=104KiB/s (106kB/s)(104KiB/1001msec) 00:09:04.333 slat (nsec): min=7079, max=23858, avg=20149.77, stdev=5592.13 00:09:04.333 clat (usec): min=247, max=42027, avg=34773.11, stdev=14989.11 00:09:04.333 lat (usec): min=256, max=42051, avg=34793.26, stdev=14989.56 00:09:04.333 clat percentiles (usec): 00:09:04.333 | 1.00th=[ 249], 5.00th=[ 281], 10.00th=[ 297], 20.00th=[40633], 00:09:04.333 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:04.333 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:09:04.333 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:04.333 | 99.99th=[42206] 00:09:04.333 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:09:04.333 slat (nsec): min=10532, max=41365, avg=12785.47, stdev=2109.05 00:09:04.333 clat (usec): min=143, max=344, avg=172.14, stdev=14.95 00:09:04.333 lat (usec): min=155, max=385, avg=184.93, stdev=15.74 00:09:04.333 clat 
percentiles (usec): 00:09:04.333 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:09:04.333 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 172], 60.00th=[ 174], 00:09:04.333 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 190], 95.00th=[ 196], 00:09:04.333 | 99.00th=[ 204], 99.50th=[ 208], 99.90th=[ 347], 99.95th=[ 347], 00:09:04.333 | 99.99th=[ 347] 00:09:04.333 bw ( KiB/s): min= 4096, max= 4096, per=23.79%, avg=4096.00, stdev= 0.00, samples=1 00:09:04.333 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:04.333 lat (usec) : 250=94.98%, 500=0.93% 00:09:04.333 lat (msec) : 50=4.09% 00:09:04.333 cpu : usr=0.60%, sys=0.40%, ctx=538, majf=0, minf=1 00:09:04.333 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:04.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.333 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.333 issued rwts: total=26,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:04.333 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:04.333 00:09:04.333 Run status group 0 (all jobs): 00:09:04.333 READ: bw=9.89MiB/s (10.4MB/s), 87.5KiB/s-9.99MiB/s (89.6kB/s-10.5MB/s), io=10.3MiB (10.8MB), run=1001-1039msec 00:09:04.333 WRITE: bw=16.8MiB/s (17.6MB/s), 1971KiB/s-11.5MiB/s (2018kB/s-12.0MB/s), io=17.5MiB (18.3MB), run=1001-1039msec 00:09:04.333 00:09:04.333 Disk stats (read/write): 00:09:04.333 nvme0n1: ios=46/512, merge=0/0, ticks=1726/83, in_queue=1809, util=98.10% 00:09:04.333 nvme0n2: ios=2099/2560, merge=0/0, ticks=1362/337, in_queue=1699, util=98.48% 00:09:04.333 nvme0n3: ios=42/512, merge=0/0, ticks=1728/91, in_queue=1819, util=98.44% 00:09:04.333 nvme0n4: ios=46/512, merge=0/0, ticks=972/89, in_queue=1061, util=90.87% 00:09:04.333 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:04.333 [global] 00:09:04.333 thread=1 00:09:04.333 invalidate=1 00:09:04.333 rw=randwrite 00:09:04.333 time_based=1 00:09:04.333 runtime=1 00:09:04.333 ioengine=libaio 00:09:04.333 direct=1 00:09:04.333 bs=4096 00:09:04.333 iodepth=1 00:09:04.333 norandommap=0 00:09:04.333 numjobs=1 00:09:04.333 00:09:04.333 verify_dump=1 00:09:04.333 verify_backlog=512 00:09:04.333 verify_state_save=0 00:09:04.333 do_verify=1 00:09:04.333 verify=crc32c-intel 00:09:04.333 [job0] 00:09:04.333 filename=/dev/nvme0n1 00:09:04.333 [job1] 00:09:04.333 filename=/dev/nvme0n2 00:09:04.333 [job2] 00:09:04.333 filename=/dev/nvme0n3 00:09:04.333 [job3] 00:09:04.333 filename=/dev/nvme0n4 00:09:04.333 Could not set queue depth (nvme0n1) 00:09:04.333 Could not set queue depth (nvme0n2) 00:09:04.333 Could not set queue depth (nvme0n3) 00:09:04.333 Could not set queue depth (nvme0n4) 00:09:04.605 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:04.605 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:04.605 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:04.605 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:04.605 fio-3.35 00:09:04.605 Starting 4 threads 00:09:05.979 00:09:05.979 job0: (groupid=0, jobs=1): err= 0: pid=1406617: Thu Dec 12 10:22:39 2024 00:09:05.979 read: IOPS=1534, BW=6138KiB/s 
(6285kB/s)(6144KiB/1001msec) 00:09:05.979 slat (nsec): min=7350, max=26143, avg=8309.90, stdev=1594.90 00:09:05.979 clat (usec): min=183, max=41106, avg=439.22, stdev=2745.41 00:09:05.979 lat (usec): min=191, max=41118, avg=447.53, stdev=2746.20 00:09:05.979 clat percentiles (usec): 00:09:05.979 | 1.00th=[ 200], 5.00th=[ 208], 10.00th=[ 221], 20.00th=[ 233], 00:09:05.979 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 249], 00:09:05.979 | 70.00th=[ 253], 80.00th=[ 262], 90.00th=[ 310], 95.00th=[ 322], 00:09:05.979 | 99.00th=[ 498], 99.50th=[ 603], 99.90th=[41157], 99.95th=[41157], 00:09:05.979 | 99.99th=[41157] 00:09:05.979 write: IOPS=1805, BW=7221KiB/s (7394kB/s)(7228KiB/1001msec); 0 zone resets 00:09:05.979 slat (nsec): min=9640, max=44709, avg=10831.03, stdev=2142.63 00:09:05.979 clat (usec): min=110, max=318, avg=156.71, stdev=23.02 00:09:05.979 lat (usec): min=124, max=357, avg=167.54, stdev=23.36 00:09:05.979 clat percentiles (usec): 00:09:05.979 | 1.00th=[ 121], 5.00th=[ 128], 10.00th=[ 133], 20.00th=[ 137], 00:09:05.979 | 30.00th=[ 141], 40.00th=[ 145], 50.00th=[ 153], 60.00th=[ 163], 00:09:05.979 | 70.00th=[ 169], 80.00th=[ 178], 90.00th=[ 188], 95.00th=[ 196], 00:09:05.979 | 99.00th=[ 212], 99.50th=[ 221], 99.90th=[ 314], 99.95th=[ 318], 00:09:05.979 | 99.99th=[ 318] 00:09:05.979 bw ( KiB/s): min=10464, max=10464, per=49.50%, avg=10464.00, stdev= 0.00, samples=1 00:09:05.979 iops : min= 2616, max= 2616, avg=2616.00, stdev= 0.00, samples=1 00:09:05.979 lat (usec) : 250=83.22%, 500=16.36%, 750=0.21% 00:09:05.979 lat (msec) : 50=0.21% 00:09:05.979 cpu : usr=3.00%, sys=5.00%, ctx=3343, majf=0, minf=2 00:09:05.979 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:05.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.979 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.979 issued rwts: total=1536,1807,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:05.979 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:05.979 job1: (groupid=0, jobs=1): err= 0: pid=1406633: Thu Dec 12 10:22:39 2024 00:09:05.979 read: IOPS=37, BW=149KiB/s (153kB/s)(152KiB/1020msec) 00:09:05.979 slat (nsec): min=7122, max=23644, avg=8846.18, stdev=2751.00 00:09:05.979 clat (usec): min=191, max=41973, avg=23837.37, stdev=20408.83 00:09:05.979 lat (usec): min=198, max=41983, avg=23846.22, stdev=20409.27 00:09:05.979 clat percentiles (usec): 00:09:05.979 | 1.00th=[ 192], 5.00th=[ 200], 10.00th=[ 210], 20.00th=[ 227], 00:09:05.979 | 30.00th=[ 235], 40.00th=[ 260], 50.00th=[40633], 60.00th=[41157], 00:09:05.979 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:09:05.979 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:05.979 | 99.99th=[42206] 00:09:05.979 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:09:05.979 slat (nsec): min=9423, max=38238, avg=11291.20, stdev=2528.06 00:09:05.979 clat (usec): min=113, max=389, avg=207.65, stdev=62.92 00:09:05.979 lat (usec): min=125, max=428, avg=218.95, stdev=63.25 00:09:05.979 clat percentiles (usec): 00:09:05.979 | 1.00th=[ 119], 5.00th=[ 133], 10.00th=[ 143], 20.00th=[ 169], 00:09:05.979 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 190], 60.00th=[ 196], 00:09:05.979 | 70.00th=[ 208], 80.00th=[ 229], 90.00th=[ 334], 95.00th=[ 343], 00:09:05.979 | 99.00th=[ 359], 99.50th=[ 363], 99.90th=[ 392], 99.95th=[ 392], 00:09:05.979 | 99.99th=[ 392] 00:09:05.979 bw ( KiB/s): min= 4096, max= 4096, 
per=19.37%, avg=4096.00, stdev= 0.00, samples=1 00:09:05.979 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:05.979 lat (usec) : 250=79.09%, 500=16.91% 00:09:05.979 lat (msec) : 50=4.00% 00:09:05.979 cpu : usr=0.00%, sys=0.69%, ctx=551, majf=0, minf=1 00:09:05.979 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:05.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.979 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.979 issued rwts: total=38,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:05.979 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:05.979 job2: (groupid=0, jobs=1): err= 0: pid=1406651: Thu Dec 12 10:22:39 2024 00:09:05.979 read: IOPS=21, BW=87.6KiB/s (89.8kB/s)(88.0KiB/1004msec) 00:09:05.979 slat (nsec): min=9954, max=22071, avg=15269.23, stdev=5458.42 00:09:05.979 clat (usec): min=40747, max=41983, avg=41018.76, stdev=227.39 00:09:05.979 lat (usec): min=40757, max=41993, avg=41034.03, stdev=226.23 00:09:05.979 clat percentiles (usec): 00:09:05.979 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:05.979 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:05.979 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:05.979 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:05.979 | 99.99th=[42206] 00:09:05.979 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:09:05.979 slat (nsec): min=10812, max=37100, avg=12801.72, stdev=2372.51 00:09:05.979 clat (usec): min=139, max=308, avg=181.12, stdev=21.58 00:09:05.979 lat (usec): min=150, max=345, avg=193.92, stdev=22.22 00:09:05.979 clat percentiles (usec): 00:09:05.979 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 163], 00:09:05.979 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 184], 00:09:05.979 | 70.00th=[ 188], 80.00th=[ 198], 90.00th=[ 212], 95.00th=[ 221], 00:09:05.979 | 99.00th=[ 243], 99.50th=[ 262], 99.90th=[ 310], 99.95th=[ 310], 00:09:05.979 | 99.99th=[ 310] 00:09:05.979 bw ( KiB/s): min= 4096, max= 4096, per=19.37%, avg=4096.00, stdev= 0.00, samples=1 00:09:05.979 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:05.979 lat (usec) : 250=95.13%, 500=0.75% 00:09:05.979 lat (msec) : 50=4.12% 00:09:05.979 cpu : usr=0.30%, sys=0.60%, ctx=534, majf=0, minf=1 00:09:05.979 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:05.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.979 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.979 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:05.979 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:05.979 job3: (groupid=0, jobs=1): err= 0: pid=1406657: Thu Dec 12 10:22:39 2024 00:09:05.979 read: IOPS=2354, BW=9419KiB/s (9645kB/s)(9428KiB/1001msec) 00:09:05.979 slat (nsec): min=7328, max=23777, avg=8289.97, stdev=1219.90 00:09:05.979 clat (usec): min=161, max=499, avg=233.61, stdev=38.33 00:09:05.979 lat (usec): min=168, max=507, avg=241.90, stdev=38.39 00:09:05.979 clat percentiles (usec): 00:09:05.979 | 1.00th=[ 174], 5.00th=[ 180], 10.00th=[ 186], 20.00th=[ 196], 00:09:05.979 | 30.00th=[ 212], 40.00th=[ 231], 50.00th=[ 237], 60.00th=[ 243], 00:09:05.979 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 285], 95.00th=[ 297], 00:09:05.979 | 99.00th=[ 314], 99.50th=[ 371], 
99.90th=[ 490], 99.95th=[ 498], 00:09:05.979 | 99.99th=[ 498] 00:09:05.979 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:05.979 slat (nsec): min=10577, max=35268, avg=11589.94, stdev=1485.75 00:09:05.979 clat (usec): min=117, max=291, avg=150.73, stdev=20.47 00:09:05.979 lat (usec): min=129, max=303, avg=162.32, stdev=20.74 00:09:05.979 clat percentiles (usec): 00:09:05.979 | 1.00th=[ 125], 5.00th=[ 129], 10.00th=[ 131], 20.00th=[ 135], 00:09:05.979 | 30.00th=[ 137], 40.00th=[ 141], 50.00th=[ 145], 60.00th=[ 149], 00:09:05.979 | 70.00th=[ 157], 80.00th=[ 169], 90.00th=[ 182], 95.00th=[ 190], 00:09:05.979 | 99.00th=[ 212], 99.50th=[ 229], 99.90th=[ 251], 99.95th=[ 285], 00:09:05.979 | 99.99th=[ 293] 00:09:05.979 bw ( KiB/s): min=10728, max=10728, per=50.74%, avg=10728.00, stdev= 0.00, samples=1 00:09:05.979 iops : min= 2682, max= 2682, avg=2682.00, stdev= 0.00, samples=1 00:09:05.979 lat (usec) : 250=86.78%, 500=13.22% 00:09:05.979 cpu : usr=3.40%, sys=8.40%, ctx=4918, majf=0, minf=1 00:09:05.979 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:05.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.979 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.979 issued rwts: total=2357,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:05.979 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:05.979 00:09:05.979 Run status group 0 (all jobs): 00:09:05.979 READ: bw=15.1MiB/s (15.9MB/s), 87.6KiB/s-9419KiB/s (89.8kB/s-9645kB/s), io=15.4MiB (16.2MB), run=1001-1020msec 00:09:05.979 WRITE: bw=20.6MiB/s (21.6MB/s), 2008KiB/s-9.99MiB/s (2056kB/s-10.5MB/s), io=21.1MiB (22.1MB), run=1001-1020msec 00:09:05.979 00:09:05.979 Disk stats (read/write): 00:09:05.979 nvme0n1: ios=1582/1536, merge=0/0, ticks=544/223, in_queue=767, util=86.77% 00:09:05.979 nvme0n2: ios=52/512, merge=0/0, ticks=1726/106, in_queue=1832, util=98.48% 00:09:05.979 nvme0n3: ios=45/512, merge=0/0, ticks=1422/92, in_queue=1514, util=100.00% 00:09:05.979 nvme0n4: ios=2073/2105, merge=0/0, ticks=1436/300, in_queue=1736, util=98.43% 00:09:05.979 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:05.979 [global] 00:09:05.979 thread=1 00:09:05.979 invalidate=1 00:09:05.979 rw=write 00:09:05.979 time_based=1 00:09:05.979 runtime=1 00:09:05.979 ioengine=libaio 00:09:05.979 direct=1 00:09:05.979 bs=4096 00:09:05.979 iodepth=128 00:09:05.979 norandommap=0 00:09:05.979 numjobs=1 00:09:05.979 00:09:05.979 verify_dump=1 00:09:05.979 verify_backlog=512 00:09:05.979 verify_state_save=0 00:09:05.979 do_verify=1 00:09:05.979 verify=crc32c-intel 00:09:05.979 [job0] 00:09:05.979 filename=/dev/nvme0n1 00:09:05.979 [job1] 00:09:05.979 filename=/dev/nvme0n2 00:09:05.979 [job2] 00:09:05.979 filename=/dev/nvme0n3 00:09:05.979 [job3] 00:09:05.979 filename=/dev/nvme0n4 00:09:05.980 Could not set queue depth (nvme0n1) 00:09:05.980 Could not set queue depth (nvme0n2) 00:09:05.980 Could not set queue depth (nvme0n3) 00:09:05.980 Could not set queue depth (nvme0n4) 00:09:05.980 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:05.980 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:05.980 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:09:05.980 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:05.980 fio-3.35 00:09:05.980 Starting 4 threads 00:09:07.354 00:09:07.354 job0: (groupid=0, jobs=1): err= 0: pid=1407059: Thu Dec 12 10:22:41 2024 00:09:07.354 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:09:07.354 slat (nsec): min=1271, max=18515k, avg=96578.20, stdev=886204.52 00:09:07.354 clat (usec): min=722, max=46072, avg=13213.69, stdev=7642.00 00:09:07.354 lat (usec): min=727, max=46082, avg=13310.27, stdev=7726.09 00:09:07.354 clat percentiles (usec): 00:09:07.354 | 1.00th=[ 3916], 5.00th=[ 7439], 10.00th=[ 7898], 20.00th=[ 8291], 00:09:07.354 | 30.00th=[ 8586], 40.00th=[ 9241], 50.00th=[ 9896], 60.00th=[10552], 00:09:07.354 | 70.00th=[12518], 80.00th=[18220], 90.00th=[26346], 95.00th=[28181], 00:09:07.354 | 99.00th=[39584], 99.50th=[42730], 99.90th=[45876], 99.95th=[45876], 00:09:07.354 | 99.99th=[45876] 00:09:07.354 write: IOPS=5188, BW=20.3MiB/s (21.3MB/s)(20.3MiB/1003msec); 0 zone resets 00:09:07.354 slat (nsec): min=1952, max=16406k, avg=75581.66, stdev=631383.25 00:09:07.354 clat (usec): min=322, max=46004, avg=11431.05, stdev=7364.36 00:09:07.354 lat (usec): min=881, max=46008, avg=11506.64, stdev=7414.00 00:09:07.354 clat percentiles (usec): 00:09:07.354 | 1.00th=[ 2114], 5.00th=[ 4817], 10.00th=[ 5538], 20.00th=[ 6849], 00:09:07.354 | 30.00th=[ 7242], 40.00th=[ 7439], 50.00th=[ 8291], 60.00th=[ 9634], 00:09:07.354 | 70.00th=[12780], 80.00th=[16450], 90.00th=[21365], 95.00th=[28443], 00:09:07.354 | 99.00th=[34866], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:09:07.354 | 99.99th=[45876] 00:09:07.354 bw ( KiB/s): min=20208, max=20752, per=28.76%, avg=20480.00, stdev=384.67, samples=2 00:09:07.354 iops : min= 5052, max= 5188, avg=5120.00, stdev=96.17, samples=2 00:09:07.355 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.08% 00:09:07.355 lat (msec) : 2=0.74%, 4=1.52%, 10=54.90%, 20=27.44%, 50=15.28% 00:09:07.355 cpu : usr=3.59%, sys=6.39%, ctx=320, majf=0, minf=1 00:09:07.355 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:07.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:07.355 issued rwts: total=5120,5204,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:07.355 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:07.355 job1: (groupid=0, jobs=1): err= 0: pid=1407060: Thu Dec 12 10:22:41 2024 00:09:07.355 read: IOPS=4671, BW=18.2MiB/s (19.1MB/s)(18.3MiB/1004msec) 00:09:07.355 slat (nsec): min=1636, max=8937.1k, avg=94170.14, stdev=564141.03 00:09:07.355 clat (usec): min=594, max=22550, avg=12228.64, stdev=2927.46 00:09:07.355 lat (usec): min=4722, max=30716, avg=12322.81, stdev=2963.71 00:09:07.355 clat percentiles (usec): 00:09:07.355 | 1.00th=[ 7439], 5.00th=[ 8848], 10.00th=[ 9503], 20.00th=[ 9896], 00:09:07.355 | 30.00th=[10028], 40.00th=[10814], 50.00th=[11600], 60.00th=[12387], 00:09:07.355 | 70.00th=[13304], 80.00th=[14615], 90.00th=[16188], 95.00th=[17695], 00:09:07.355 | 99.00th=[21103], 99.50th=[21890], 99.90th=[22152], 99.95th=[22152], 00:09:07.355 | 99.99th=[22676] 00:09:07.355 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:09:07.355 slat (usec): min=2, max=11420, avg=103.01, stdev=624.94 00:09:07.355 clat (usec): min=5761, max=30488, avg=13599.84, stdev=4024.80 00:09:07.355 lat (usec): min=5769, max=30522, 
avg=13702.85, stdev=4076.44 00:09:07.355 clat percentiles (usec): 00:09:07.355 | 1.00th=[ 7373], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[10028], 00:09:07.355 | 30.00th=[10683], 40.00th=[11469], 50.00th=[12518], 60.00th=[13304], 00:09:07.355 | 70.00th=[15664], 80.00th=[17433], 90.00th=[20055], 95.00th=[21365], 00:09:07.355 | 99.00th=[22938], 99.50th=[23462], 99.90th=[25560], 99.95th=[27919], 00:09:07.355 | 99.99th=[30540] 00:09:07.355 bw ( KiB/s): min=18360, max=22232, per=28.51%, avg=20296.00, stdev=2737.92, samples=2 00:09:07.355 iops : min= 4590, max= 5558, avg=5074.00, stdev=684.48, samples=2 00:09:07.355 lat (usec) : 750=0.01% 00:09:07.355 lat (msec) : 10=23.52%, 20=69.73%, 50=6.74% 00:09:07.355 cpu : usr=4.89%, sys=5.68%, ctx=483, majf=0, minf=1 00:09:07.355 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:07.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:07.355 issued rwts: total=4690,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:07.355 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:07.355 job2: (groupid=0, jobs=1): err= 0: pid=1407061: Thu Dec 12 10:22:41 2024 00:09:07.355 read: IOPS=3041, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1010msec) 00:09:07.355 slat (nsec): min=1074, max=18092k, avg=160668.45, stdev=1116235.98 00:09:07.355 clat (usec): min=4171, max=53860, avg=19744.00, stdev=9612.25 00:09:07.355 lat (usec): min=4237, max=53886, avg=19904.67, stdev=9725.45 00:09:07.355 clat percentiles (usec): 00:09:07.355 | 1.00th=[ 6980], 5.00th=[ 9896], 10.00th=[10683], 20.00th=[11338], 00:09:07.355 | 30.00th=[11863], 40.00th=[13173], 50.00th=[14877], 60.00th=[22152], 00:09:07.355 | 70.00th=[26608], 80.00th=[30016], 90.00th=[33817], 95.00th=[38536], 00:09:07.355 | 99.00th=[43779], 99.50th=[43779], 99.90th=[44827], 99.95th=[45876], 00:09:07.355 | 99.99th=[53740] 00:09:07.355 write: IOPS=3522, BW=13.8MiB/s (14.4MB/s)(13.9MiB/1010msec); 0 zone resets 00:09:07.355 slat (nsec): min=1959, max=17749k, avg=133441.88, stdev=769287.50 00:09:07.355 clat (usec): min=1019, max=53803, avg=18970.77, stdev=11421.76 00:09:07.355 lat (usec): min=1029, max=53811, avg=19104.21, stdev=11507.57 00:09:07.355 clat percentiles (usec): 00:09:07.355 | 1.00th=[ 4178], 5.00th=[ 8094], 10.00th=[ 9241], 20.00th=[10814], 00:09:07.355 | 30.00th=[11338], 40.00th=[11731], 50.00th=[14615], 60.00th=[16909], 00:09:07.355 | 70.00th=[19530], 80.00th=[30540], 90.00th=[38536], 95.00th=[43779], 00:09:07.355 | 99.00th=[47973], 99.50th=[49546], 99.90th=[53740], 99.95th=[53740], 00:09:07.355 | 99.99th=[53740] 00:09:07.355 bw ( KiB/s): min=11064, max=16384, per=19.28%, avg=13724.00, stdev=3761.81, samples=2 00:09:07.355 iops : min= 2766, max= 4096, avg=3431.00, stdev=940.45, samples=2 00:09:07.355 lat (msec) : 2=0.03%, 4=0.30%, 10=7.95%, 20=56.26%, 50=35.22% 00:09:07.355 lat (msec) : 100=0.24% 00:09:07.355 cpu : usr=1.88%, sys=3.77%, ctx=381, majf=0, minf=2 00:09:07.355 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:09:07.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:07.355 issued rwts: total=3072,3558,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:07.355 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:07.355 job3: (groupid=0, jobs=1): err= 0: pid=1407062: Thu Dec 12 10:22:41 2024 00:09:07.355 read: IOPS=3641, 
BW=14.2MiB/s (14.9MB/s)(14.4MiB/1010msec) 00:09:07.355 slat (nsec): min=1173, max=15110k, avg=126487.68, stdev=949937.42 00:09:07.355 clat (usec): min=1927, max=88269, avg=15267.23, stdev=9596.19 00:09:07.355 lat (usec): min=1932, max=88276, avg=15393.72, stdev=9687.57 00:09:07.355 clat percentiles (usec): 00:09:07.355 | 1.00th=[ 4621], 5.00th=[ 7373], 10.00th=[ 8848], 20.00th=[10552], 00:09:07.355 | 30.00th=[11207], 40.00th=[11469], 50.00th=[12387], 60.00th=[13960], 00:09:07.355 | 70.00th=[16188], 80.00th=[18482], 90.00th=[23725], 95.00th=[27919], 00:09:07.355 | 99.00th=[72877], 99.50th=[80217], 99.90th=[88605], 99.95th=[88605], 00:09:07.355 | 99.99th=[88605] 00:09:07.355 write: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec); 0 zone resets 00:09:07.355 slat (usec): min=2, max=19609, avg=118.32, stdev=757.50 00:09:07.355 clat (usec): min=2173, max=88247, avg=16914.20, stdev=13386.17 00:09:07.355 lat (usec): min=2188, max=88257, avg=17032.51, stdev=13450.16 00:09:07.355 clat percentiles (usec): 00:09:07.355 | 1.00th=[ 3490], 5.00th=[ 6128], 10.00th=[ 8717], 20.00th=[10683], 00:09:07.355 | 30.00th=[11338], 40.00th=[11469], 50.00th=[12649], 60.00th=[14484], 00:09:07.355 | 70.00th=[16450], 80.00th=[18744], 90.00th=[27132], 95.00th=[47449], 00:09:07.355 | 99.00th=[78119], 99.50th=[81265], 99.90th=[83362], 99.95th=[83362], 00:09:07.355 | 99.99th=[88605] 00:09:07.355 bw ( KiB/s): min=14064, max=18440, per=22.83%, avg=16252.00, stdev=3094.30, samples=2 00:09:07.355 iops : min= 3516, max= 4610, avg=4063.00, stdev=773.57, samples=2 00:09:07.355 lat (msec) : 2=0.10%, 4=1.00%, 10=13.56%, 20=68.73%, 50=13.34% 00:09:07.355 lat (msec) : 100=3.27% 00:09:07.355 cpu : usr=2.08%, sys=4.46%, ctx=423, majf=0, minf=1 00:09:07.355 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:07.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:07.355 issued rwts: total=3678,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:07.355 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:07.355 00:09:07.355 Run status group 0 (all jobs): 00:09:07.355 READ: bw=64.0MiB/s (67.2MB/s), 11.9MiB/s-19.9MiB/s (12.5MB/s-20.9MB/s), io=64.7MiB (67.8MB), run=1003-1010msec 00:09:07.355 WRITE: bw=69.5MiB/s (72.9MB/s), 13.8MiB/s-20.3MiB/s (14.4MB/s-21.3MB/s), io=70.2MiB (73.6MB), run=1003-1010msec 00:09:07.355 00:09:07.355 Disk stats (read/write): 00:09:07.355 nvme0n1: ios=4624/4615, merge=0/0, ticks=53770/45107, in_queue=98877, util=97.90% 00:09:07.355 nvme0n2: ios=3989/4096, merge=0/0, ticks=24532/26667, in_queue=51199, util=98.17% 00:09:07.355 nvme0n3: ios=2560/3038, merge=0/0, ticks=27507/32699, in_queue=60206, util=88.95% 00:09:07.355 nvme0n4: ios=2831/3072, merge=0/0, ticks=41815/57199, in_queue=99014, util=98.11% 00:09:07.355 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:07.355 [global] 00:09:07.355 thread=1 00:09:07.355 invalidate=1 00:09:07.355 rw=randwrite 00:09:07.355 time_based=1 00:09:07.355 runtime=1 00:09:07.355 ioengine=libaio 00:09:07.355 direct=1 00:09:07.355 bs=4096 00:09:07.355 iodepth=128 00:09:07.355 norandommap=0 00:09:07.355 numjobs=1 00:09:07.355 00:09:07.355 verify_dump=1 00:09:07.355 verify_backlog=512 00:09:07.355 verify_state_save=0 00:09:07.355 do_verify=1 00:09:07.355 verify=crc32c-intel 00:09:07.355 [job0] 
00:09:07.355 filename=/dev/nvme0n1 00:09:07.355 [job1] 00:09:07.355 filename=/dev/nvme0n2 00:09:07.355 [job2] 00:09:07.355 filename=/dev/nvme0n3 00:09:07.355 [job3] 00:09:07.355 filename=/dev/nvme0n4 00:09:07.355 Could not set queue depth (nvme0n1) 00:09:07.355 Could not set queue depth (nvme0n2) 00:09:07.355 Could not set queue depth (nvme0n3) 00:09:07.355 Could not set queue depth (nvme0n4) 00:09:07.613 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:07.613 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:07.613 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:07.613 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:07.613 fio-3.35 00:09:07.613 Starting 4 threads 00:09:08.988 00:09:08.988 job0: (groupid=0, jobs=1): err= 0: pid=1407431: Thu Dec 12 10:22:42 2024 00:09:08.988 read: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec) 00:09:08.988 slat (nsec): min=1457, max=10561k, avg=99282.82, stdev=628267.93 00:09:08.988 clat (usec): min=4308, max=37496, avg=11324.65, stdev=4928.70 00:09:08.988 lat (usec): min=4317, max=37503, avg=11423.93, stdev=4980.08 00:09:08.988 clat percentiles (usec): 00:09:08.988 | 1.00th=[ 5276], 5.00th=[ 6718], 10.00th=[ 8029], 20.00th=[ 8455], 00:09:08.988 | 30.00th=[ 8717], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[10683], 00:09:08.988 | 70.00th=[11600], 80.00th=[13304], 90.00th=[17433], 95.00th=[22152], 00:09:08.988 | 99.00th=[32113], 99.50th=[34341], 99.90th=[37487], 99.95th=[37487], 00:09:08.988 | 99.99th=[37487] 00:09:08.988 write: IOPS=4861, BW=19.0MiB/s (19.9MB/s)(19.1MiB/1007msec); 0 zone resets 00:09:08.988 slat (usec): min=2, max=10509, avg=103.44, stdev=470.11 00:09:08.988 clat (usec): min=185, max=37436, avg=15438.52, stdev=7017.24 00:09:08.988 lat (usec): min=214, max=37440, avg=15541.96, stdev=7066.30 00:09:08.988 clat percentiles (usec): 00:09:08.988 | 1.00th=[ 1876], 5.00th=[ 5735], 10.00th=[ 7111], 20.00th=[ 8029], 00:09:08.988 | 30.00th=[11469], 40.00th=[14746], 50.00th=[15664], 60.00th=[15795], 00:09:08.988 | 70.00th=[18220], 80.00th=[21365], 90.00th=[26084], 95.00th=[28443], 00:09:08.988 | 99.00th=[31327], 99.50th=[31851], 99.90th=[32113], 99.95th=[36963], 00:09:08.988 | 99.99th=[37487] 00:09:08.988 bw ( KiB/s): min=18704, max=19440, per=31.06%, avg=19072.00, stdev=520.43, samples=2 00:09:08.988 iops : min= 4676, max= 4860, avg=4768.00, stdev=130.11, samples=2 00:09:08.988 lat (usec) : 250=0.02%, 500=0.03%, 750=0.06% 00:09:08.988 lat (msec) : 2=0.46%, 4=1.09%, 10=39.72%, 20=43.34%, 50=15.27% 00:09:08.988 cpu : usr=3.68%, sys=6.06%, ctx=544, majf=0, minf=1 00:09:08.988 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:08.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:08.988 issued rwts: total=4608,4896,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:08.988 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:08.988 job1: (groupid=0, jobs=1): err= 0: pid=1407432: Thu Dec 12 10:22:42 2024 00:09:08.988 read: IOPS=3087, BW=12.1MiB/s (12.6MB/s)(12.6MiB/1045msec) 00:09:08.988 slat (nsec): min=1521, max=17729k, avg=109151.50, stdev=721585.49 00:09:08.988 clat (usec): min=7361, max=60264, avg=15379.03, stdev=9739.66 00:09:08.988 lat 
(usec): min=7364, max=60590, avg=15488.18, stdev=9784.18 00:09:08.988 clat percentiles (usec): 00:09:08.988 | 1.00th=[ 7963], 5.00th=[ 9372], 10.00th=[10028], 20.00th=[10290], 00:09:08.988 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10945], 60.00th=[11338], 00:09:08.988 | 70.00th=[14222], 80.00th=[18220], 90.00th=[29230], 95.00th=[33162], 00:09:08.988 | 99.00th=[56886], 99.50th=[57934], 99.90th=[58983], 99.95th=[60031], 00:09:08.988 | 99.99th=[60031] 00:09:08.988 write: IOPS=3429, BW=13.4MiB/s (14.0MB/s)(14.0MiB/1045msec); 0 zone resets 00:09:08.988 slat (usec): min=2, max=18773, avg=175.39, stdev=851.94 00:09:08.988 clat (usec): min=8943, max=76504, avg=22808.58, stdev=12264.39 00:09:08.988 lat (usec): min=8952, max=77324, avg=22983.96, stdev=12326.22 00:09:08.988 clat percentiles (usec): 00:09:08.988 | 1.00th=[ 9896], 5.00th=[13042], 10.00th=[14746], 20.00th=[15533], 00:09:08.988 | 30.00th=[15664], 40.00th=[15795], 50.00th=[17171], 60.00th=[19268], 00:09:08.988 | 70.00th=[22938], 80.00th=[31327], 90.00th=[42206], 95.00th=[43254], 00:09:08.988 | 99.00th=[72877], 99.50th=[73925], 99.90th=[76022], 99.95th=[76022], 00:09:08.988 | 99.99th=[76022] 00:09:08.988 bw ( KiB/s): min=12808, max=15864, per=23.35%, avg=14336.00, stdev=2160.92, samples=2 00:09:08.988 iops : min= 3202, max= 3966, avg=3584.00, stdev=540.23, samples=2 00:09:08.988 lat (msec) : 10=5.40%, 20=67.83%, 50=23.58%, 100=3.19% 00:09:08.988 cpu : usr=3.07%, sys=3.64%, ctx=477, majf=0, minf=1 00:09:08.988 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:08.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:08.988 issued rwts: total=3226,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:08.988 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:08.988 job2: (groupid=0, jobs=1): err= 0: pid=1407439: Thu Dec 12 10:22:42 2024 00:09:08.988 read: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(10.0MiB/1009msec) 00:09:08.988 slat (nsec): min=1145, max=19124k, avg=156540.09, stdev=1023529.77 00:09:08.988 clat (usec): min=6029, max=88956, avg=19505.76, stdev=13205.51 00:09:08.988 lat (usec): min=6035, max=88981, avg=19662.30, stdev=13307.06 00:09:08.988 clat percentiles (usec): 00:09:08.988 | 1.00th=[ 7767], 5.00th=[10028], 10.00th=[10814], 20.00th=[12518], 00:09:08.988 | 30.00th=[13960], 40.00th=[14746], 50.00th=[15401], 60.00th=[15795], 00:09:08.988 | 70.00th=[16909], 80.00th=[19792], 90.00th=[38536], 95.00th=[44303], 00:09:08.988 | 99.00th=[79168], 99.50th=[79168], 99.90th=[79168], 99.95th=[82314], 00:09:08.988 | 99.99th=[88605] 00:09:08.988 write: IOPS=2926, BW=11.4MiB/s (12.0MB/s)(11.5MiB/1009msec); 0 zone resets 00:09:08.988 slat (usec): min=2, max=20787, avg=196.79, stdev=1178.62 00:09:08.988 clat (usec): min=5150, max=79789, avg=25499.81, stdev=12973.46 00:09:08.988 lat (usec): min=7026, max=79810, avg=25696.60, stdev=13061.45 00:09:08.988 clat percentiles (usec): 00:09:08.988 | 1.00th=[ 9765], 5.00th=[11469], 10.00th=[13042], 20.00th=[15664], 00:09:08.988 | 30.00th=[17695], 40.00th=[17957], 50.00th=[18744], 60.00th=[24511], 00:09:08.988 | 70.00th=[31589], 80.00th=[35914], 90.00th=[45876], 95.00th=[47973], 00:09:08.988 | 99.00th=[69731], 99.50th=[69731], 99.90th=[69731], 99.95th=[79168], 00:09:08.988 | 99.99th=[80217] 00:09:08.988 bw ( KiB/s): min= 8192, max=14416, per=18.41%, avg=11304.00, stdev=4401.03, samples=2 00:09:08.988 iops : min= 2048, max= 3604, avg=2826.00, stdev=1100.26, 
samples=2 00:09:08.988 lat (msec) : 10=3.08%, 20=63.87%, 50=28.99%, 100=4.06% 00:09:08.988 cpu : usr=1.69%, sys=3.08%, ctx=334, majf=0, minf=1 00:09:08.988 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:08.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:08.988 issued rwts: total=2560,2953,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:08.988 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:08.988 job3: (groupid=0, jobs=1): err= 0: pid=1407440: Thu Dec 12 10:22:42 2024 00:09:08.988 read: IOPS=4538, BW=17.7MiB/s (18.6MB/s)(17.8MiB/1005msec) 00:09:08.988 slat (nsec): min=1595, max=12421k, avg=109671.81, stdev=768342.87 00:09:08.988 clat (usec): min=539, max=36893, avg=13096.56, stdev=4940.66 00:09:08.988 lat (usec): min=4622, max=36899, avg=13206.24, stdev=4994.64 00:09:08.988 clat percentiles (usec): 00:09:08.988 | 1.00th=[ 5407], 5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[10159], 00:09:08.988 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11338], 60.00th=[11731], 00:09:08.988 | 70.00th=[12387], 80.00th=[16188], 90.00th=[20579], 95.00th=[23987], 00:09:08.988 | 99.00th=[30016], 99.50th=[32637], 99.90th=[36963], 99.95th=[36963], 00:09:08.988 | 99.99th=[36963] 00:09:08.988 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:09:08.988 slat (usec): min=2, max=9403, avg=101.88, stdev=547.10 00:09:08.988 clat (usec): min=1657, max=36878, avg=14695.53, stdev=5824.09 00:09:08.988 lat (usec): min=1670, max=36882, avg=14797.40, stdev=5871.43 00:09:08.988 clat percentiles (usec): 00:09:08.988 | 1.00th=[ 3949], 5.00th=[ 6194], 10.00th=[ 7963], 20.00th=[ 9110], 00:09:08.988 | 30.00th=[10814], 40.00th=[11338], 50.00th=[13960], 60.00th=[17695], 00:09:08.988 | 70.00th=[18220], 80.00th=[20055], 90.00th=[22676], 95.00th=[24511], 00:09:08.988 | 99.00th=[27657], 99.50th=[28705], 99.90th=[30278], 99.95th=[30540], 00:09:08.988 | 99.99th=[36963] 00:09:08.988 bw ( KiB/s): min=16384, max=20480, per=30.02%, avg=18432.00, stdev=2896.31, samples=2 00:09:08.988 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:09:08.988 lat (usec) : 750=0.01% 00:09:08.988 lat (msec) : 2=0.02%, 4=0.58%, 10=19.63%, 20=64.28%, 50=15.48% 00:09:08.988 cpu : usr=4.08%, sys=5.58%, ctx=425, majf=0, minf=2 00:09:08.989 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:08.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.989 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:08.989 issued rwts: total=4561,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:08.989 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:08.989 00:09:08.989 Run status group 0 (all jobs): 00:09:08.989 READ: bw=55.9MiB/s (58.6MB/s), 9.91MiB/s-17.9MiB/s (10.4MB/s-18.7MB/s), io=58.4MiB (61.3MB), run=1005-1045msec 00:09:08.989 WRITE: bw=60.0MiB/s (62.9MB/s), 11.4MiB/s-19.0MiB/s (12.0MB/s-19.9MB/s), io=62.7MiB (65.7MB), run=1005-1045msec 00:09:08.989 00:09:08.989 Disk stats (read/write): 00:09:08.989 nvme0n1: ios=4003/4096, merge=0/0, ticks=43367/61943, in_queue=105310, util=86.87% 00:09:08.989 nvme0n2: ios=2763/3072, merge=0/0, ticks=18056/34858, in_queue=52914, util=91.26% 00:09:08.989 nvme0n3: ios=2084/2290, merge=0/0, ticks=16181/20078, in_queue=36259, util=100.00% 00:09:08.989 nvme0n4: ios=3703/4096, merge=0/0, ticks=44926/60067, in_queue=104993, util=89.72% 00:09:08.989 
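After four short verified write/randwrite passes, the test changes character: a 10-second sequential read job is started in the background (note fio_pid below), and while it is still running the script deletes the backing bdevs out from under the active namespaces — first concat0 and raid0 via bdev_raid_delete, then the malloc bdevs in the $malloc_bdevs/$raid_malloc_bdevs/$concat_malloc_bdevs loop. The point is presumably graceful failure rather than throughput: in-flight reads should complete with errors instead of wedging the target, which is what the fio output below shows (err=95, Operation not supported, and err=5, Input/output error, on the affected devices).

# sketch of the hot-remove sequence driven below (rpc.py path shortened)
rpc.py bdev_raid_delete concat0
rpc.py bdev_raid_delete raid0
rpc.py bdev_malloc_delete Malloc0   # the loop continues over the remaining malloc bdevs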
10:22:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:08.989 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1407660 00:09:08.989 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:08.989 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:08.989 [global] 00:09:08.989 thread=1 00:09:08.989 invalidate=1 00:09:08.989 rw=read 00:09:08.989 time_based=1 00:09:08.989 runtime=10 00:09:08.989 ioengine=libaio 00:09:08.989 direct=1 00:09:08.989 bs=4096 00:09:08.989 iodepth=1 00:09:08.989 norandommap=1 00:09:08.989 numjobs=1 00:09:08.989 00:09:08.989 [job0] 00:09:08.989 filename=/dev/nvme0n1 00:09:08.989 [job1] 00:09:08.989 filename=/dev/nvme0n2 00:09:08.989 [job2] 00:09:08.989 filename=/dev/nvme0n3 00:09:08.989 [job3] 00:09:08.989 filename=/dev/nvme0n4 00:09:08.989 Could not set queue depth (nvme0n1) 00:09:08.989 Could not set queue depth (nvme0n2) 00:09:08.989 Could not set queue depth (nvme0n3) 00:09:08.989 Could not set queue depth (nvme0n4) 00:09:09.247 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:09.247 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:09.247 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:09.247 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:09.247 fio-3.35 00:09:09.247 Starting 4 threads 00:09:12.535 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:12.535 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=43601920, buflen=4096 00:09:12.535 fio: pid=1407803, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:12.535 10:22:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:12.535 10:22:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:12.535 10:22:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:12.535 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=6995968, buflen=4096 00:09:12.535 fio: pid=1407802, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:12.535 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=311296, buflen=4096 00:09:12.535 fio: pid=1407800, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:12.535 10:22:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:12.535 10:22:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:12.793 10:22:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 
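The block above is the hotplug half of nvmf_fio_target: fio is started in the background with a ten-second, time-based 4 KiB read job (the generated job file is echoed above), and the backing bdevs are then deleted over RPC while it runs, so the io_u "Operation not supported" errors that follow are the expected outcome rather than a failure. As a sanity check on the preceding statistics, average bandwidth tracks average IOPS times the 4 KiB block size, e.g. 3584 IOPS x 4 KiB = 14336 KiB/s, matching the avg reported in that job's bw line. A condensed sketch of the hotplug pattern, with paths and bdev names taken from this run (the exact control flow inside target/fio.sh is an assumption):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# 4 KiB blocks, queue depth 1, time-based read job against the exported namespaces
$SPDK/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!
sleep 3
# pull the backing bdevs out from under the running job; in-flight reads then
# fail with "Operation not supported" / I/O error, which is what the test wants
$SPDK/scripts/rpc.py bdev_raid_delete concat0
$SPDK/scripts/rpc.py bdev_raid_delete raid0
for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs; do
    $SPDK/scripts/rpc.py bdev_malloc_delete "$malloc_bdev"
done
wait $fio_pid || fio_status=4   # a nonzero fio exit is expected here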
00:09:12.793 10:22:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:12.793 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=54022144, buflen=4096 00:09:12.793 fio: pid=1407801, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:09:12.793 00:09:12.793 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1407800: Thu Dec 12 10:22:46 2024 00:09:12.793 read: IOPS=24, BW=97.6KiB/s (99.9kB/s)(304KiB/3115msec) 00:09:12.793 slat (usec): min=8, max=11856, avg=174.41, stdev=1348.85 00:09:12.793 clat (usec): min=535, max=41894, avg=40463.66, stdev=4643.02 00:09:12.794 lat (usec): min=560, max=53152, avg=40640.07, stdev=4864.03 00:09:12.794 clat percentiles (usec): 00:09:12.794 | 1.00th=[ 537], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:12.794 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:12.794 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:12.794 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:09:12.794 | 99.99th=[41681] 00:09:12.794 bw ( KiB/s): min= 93, max= 104, per=0.32%, avg=98.17, stdev= 4.67, samples=6 00:09:12.794 iops : min= 23, max= 26, avg=24.50, stdev= 1.22, samples=6 00:09:12.794 lat (usec) : 750=1.30% 00:09:12.794 lat (msec) : 50=97.40% 00:09:12.794 cpu : usr=0.13%, sys=0.00%, ctx=79, majf=0, minf=1 00:09:12.794 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:12.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.794 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.794 issued rwts: total=77,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:12.794 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:12.794 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=1407801: Thu Dec 12 10:22:46 2024 00:09:12.794 read: IOPS=3942, BW=15.4MiB/s (16.1MB/s)(51.5MiB/3346msec) 00:09:12.794 slat (usec): min=6, max=7602, avg=11.69, stdev=139.75 00:09:12.794 clat (usec): min=156, max=41977, avg=240.23, stdev=1137.68 00:09:12.794 lat (usec): min=165, max=48597, avg=251.43, stdev=1183.54 00:09:12.794 clat percentiles (usec): 00:09:12.794 | 1.00th=[ 169], 5.00th=[ 180], 10.00th=[ 186], 20.00th=[ 192], 00:09:12.794 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 210], 00:09:12.794 | 70.00th=[ 217], 80.00th=[ 225], 90.00th=[ 235], 95.00th=[ 241], 00:09:12.794 | 99.00th=[ 253], 99.50th=[ 260], 99.90th=[ 396], 99.95th=[41157], 00:09:12.794 | 99.99th=[41681] 00:09:12.794 bw ( KiB/s): min=12348, max=18808, per=56.08%, avg=17175.33, stdev=2458.18, samples=6 00:09:12.794 iops : min= 3087, max= 4702, avg=4293.83, stdev=614.54, samples=6 00:09:12.794 lat (usec) : 250=98.69%, 500=1.21% 00:09:12.794 lat (msec) : 2=0.01%, 10=0.02%, 50=0.08% 00:09:12.794 cpu : usr=2.21%, sys=6.07%, ctx=13197, majf=0, minf=2 00:09:12.794 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:12.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.794 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.794 issued rwts: total=13190,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:12.794 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:12.794 job2: (groupid=0, jobs=1): err=95 
(file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1407802: Thu Dec 12 10:22:46 2024 00:09:12.794 read: IOPS=583, BW=2331KiB/s (2387kB/s)(6832KiB/2931msec) 00:09:12.794 slat (usec): min=6, max=11811, avg=14.63, stdev=285.53 00:09:12.794 clat (usec): min=189, max=41426, avg=1687.09, stdev=7436.51 00:09:12.794 lat (usec): min=196, max=52975, avg=1701.71, stdev=7480.46 00:09:12.794 clat percentiles (usec): 00:09:12.794 | 1.00th=[ 206], 5.00th=[ 219], 10.00th=[ 229], 20.00th=[ 243], 00:09:12.794 | 30.00th=[ 253], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 273], 00:09:12.794 | 70.00th=[ 281], 80.00th=[ 302], 90.00th=[ 424], 95.00th=[ 486], 00:09:12.794 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:09:12.794 | 99.99th=[41681] 00:09:12.794 bw ( KiB/s): min= 96, max=11208, per=8.87%, avg=2716.80, stdev=4823.64, samples=5 00:09:12.794 iops : min= 24, max= 2802, avg=679.20, stdev=1205.91, samples=5 00:09:12.794 lat (usec) : 250=26.51%, 500=69.51%, 750=0.47% 00:09:12.794 lat (msec) : 50=3.45% 00:09:12.794 cpu : usr=0.17%, sys=0.55%, ctx=1711, majf=0, minf=2 00:09:12.794 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:12.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.794 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.794 issued rwts: total=1709,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:12.794 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:12.794 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1407803: Thu Dec 12 10:22:46 2024 00:09:12.794 read: IOPS=3931, BW=15.4MiB/s (16.1MB/s)(41.6MiB/2708msec) 00:09:12.794 slat (nsec): min=6387, max=30987, avg=7353.87, stdev=949.65 00:09:12.794 clat (usec): min=183, max=41422, avg=243.86, stdev=563.70 00:09:12.794 lat (usec): min=190, max=41431, avg=251.22, stdev=563.82 00:09:12.794 clat percentiles (usec): 00:09:12.794 | 1.00th=[ 194], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 212], 00:09:12.794 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 231], 00:09:12.794 | 70.00th=[ 235], 80.00th=[ 243], 90.00th=[ 269], 95.00th=[ 314], 00:09:12.794 | 99.00th=[ 461], 99.50th=[ 486], 99.90th=[ 510], 99.95th=[ 529], 00:09:12.794 | 99.99th=[41157] 00:09:12.794 bw ( KiB/s): min=13408, max=17416, per=51.37%, avg=15732.80, stdev=1754.21, samples=5 00:09:12.794 iops : min= 3352, max= 4354, avg=3933.20, stdev=438.55, samples=5 00:09:12.794 lat (usec) : 250=85.04%, 500=14.73%, 750=0.21% 00:09:12.794 lat (msec) : 50=0.02% 00:09:12.794 cpu : usr=1.03%, sys=3.55%, ctx=10646, majf=0, minf=2 00:09:12.794 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:12.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.794 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.794 issued rwts: total=10646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:12.794 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:12.794 00:09:12.794 Run status group 0 (all jobs): 00:09:12.794 READ: bw=29.9MiB/s (31.4MB/s), 97.6KiB/s-15.4MiB/s (99.9kB/s-16.1MB/s), io=100MiB (105MB), run=2708-3346msec 00:09:12.794 00:09:12.794 Disk stats (read/write): 00:09:12.794 nvme0n1: ios=77/0, merge=0/0, ticks=3085/0, in_queue=3085, util=95.13% 00:09:12.794 nvme0n2: ios=13195/0, merge=0/0, ticks=2972/0, in_queue=2972, util=98.54% 00:09:12.794 nvme0n3: ios=1706/0, merge=0/0, ticks=2789/0, in_queue=2789, 
util=96.17% 00:09:12.794 nvme0n4: ios=10261/0, merge=0/0, ticks=2447/0, in_queue=2447, util=96.44% 00:09:13.052 10:22:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:13.052 10:22:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:13.052 10:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:13.311 10:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:13.311 10:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:13.311 10:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:13.569 10:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:13.569 10:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:13.828 10:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:13.828 10:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1407660 00:09:13.828 10:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:13.828 10:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:13.828 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.828 10:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:13.828 10:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:13.828 10:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:13.828 10:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:13.828 10:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:13.828 10:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:13.828 10:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:13.828 10:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:13.828 10:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:13.828 nvmf hotplug test: fio failed as expected 00:09:13.828 10:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:14.086 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:14.086 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:14.086 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:14.086 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:14.086 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:14.086 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:14.086 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:14.086 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:14.086 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:14.086 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:14.086 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:14.086 rmmod nvme_tcp 00:09:14.086 rmmod nvme_fabrics 00:09:14.086 rmmod nvme_keyring 00:09:14.086 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:14.086 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:14.086 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:14.086 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1404796 ']' 00:09:14.086 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1404796 00:09:14.086 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1404796 ']' 00:09:14.086 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1404796 00:09:14.086 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:14.087 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:14.087 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1404796 00:09:14.345 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:14.345 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:14.345 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1404796' 00:09:14.345 killing process with pid 1404796 00:09:14.345 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1404796 00:09:14.345 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1404796 00:09:14.345 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:14.345 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:14.345 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:14.345 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:14.345 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:14.345 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # 
grep -v SPDK_NVMF 00:09:14.345 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:14.345 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:14.345 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:14.345 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.345 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:14.345 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.880 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:16.880 00:09:16.880 real 0m26.879s 00:09:16.880 user 1m47.692s 00:09:16.880 sys 0m8.703s 00:09:16.880 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.880 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:16.880 ************************************ 00:09:16.880 END TEST nvmf_fio_target 00:09:16.880 ************************************ 00:09:16.880 10:22:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:16.880 10:22:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:16.880 10:22:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.880 10:22:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:16.880 ************************************ 00:09:16.880 START TEST nvmf_bdevio 00:09:16.880 ************************************ 00:09:16.880 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:16.880 * Looking for test storage... 
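Every test script here runs under the same run_test harness, which is what produces the starred START TEST / END TEST banners and the real/user/sys timing block above. A rough sketch of that behaviour as it appears in the log (the actual implementation lives in autotest_common.sh and is not shown in this trace):

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"        # e.g. target/bdevio.sh --transport=tcp; timing prints as real/user/sys
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}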
00:09:16.880 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:16.880 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:16.880 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:09:16.880 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:16.880 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:16.880 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:16.880 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:16.880 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:16.880 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:16.880 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:16.880 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:16.880 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:16.880 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:16.880 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:16.880 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:16.880 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:16.880 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:16.880 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:16.880 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:16.880 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:16.880 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:16.880 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:16.880 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:16.880 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:16.880 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:16.880 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:16.880 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:16.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.881 --rc genhtml_branch_coverage=1 00:09:16.881 --rc genhtml_function_coverage=1 00:09:16.881 --rc genhtml_legend=1 00:09:16.881 --rc geninfo_all_blocks=1 00:09:16.881 --rc geninfo_unexecuted_blocks=1 00:09:16.881 00:09:16.881 ' 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:16.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.881 --rc genhtml_branch_coverage=1 00:09:16.881 --rc genhtml_function_coverage=1 00:09:16.881 --rc genhtml_legend=1 00:09:16.881 --rc geninfo_all_blocks=1 00:09:16.881 --rc geninfo_unexecuted_blocks=1 00:09:16.881 00:09:16.881 ' 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:16.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.881 --rc genhtml_branch_coverage=1 00:09:16.881 --rc genhtml_function_coverage=1 00:09:16.881 --rc genhtml_legend=1 00:09:16.881 --rc geninfo_all_blocks=1 00:09:16.881 --rc geninfo_unexecuted_blocks=1 00:09:16.881 00:09:16.881 ' 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:16.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.881 --rc genhtml_branch_coverage=1 00:09:16.881 --rc genhtml_function_coverage=1 00:09:16.881 --rc genhtml_legend=1 00:09:16.881 --rc geninfo_all_blocks=1 00:09:16.881 --rc geninfo_unexecuted_blocks=1 00:09:16.881 00:09:16.881 ' 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:16.881 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:23.451 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:23.451 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:23.451 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:23.451 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:23.451 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:23.451 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:23.451 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:23.451 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:23.451 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:23.451 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:23.452 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:23.452 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:23.452 10:22:56 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:23.452 Found net devices under 0000:af:00.0: cvl_0_0 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:23.452 Found net devices under 0000:af:00.1: cvl_0_1 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:23.452 
10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:23.452 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:23.452 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.421 ms 00:09:23.452 00:09:23.452 --- 10.0.0.2 ping statistics --- 00:09:23.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.452 rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:23.452 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:23.452 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:09:23.452 00:09:23.452 --- 10.0.0.1 ping statistics --- 00:09:23.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.452 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1412180 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:23.452 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1412180 00:09:23.453 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1412180 ']' 00:09:23.453 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.453 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:23.453 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.453 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:23.453 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:23.453 [2024-12-12 10:22:56.705458] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
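The connectivity the two pings just verified was built by nvmftestinit a few entries earlier. Collapsed out of the xtrace output, the data path for this run is (interface, namespace, and address values exactly as traced above):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target-side port of the e810 pair
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# the target itself then runs inside the namespace on cores 3-6 (-m 0x78):
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &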
00:09:23.453 [2024-12-12 10:22:56.705508] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:23.453 [2024-12-12 10:22:56.781204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:23.453 [2024-12-12 10:22:56.822983] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:23.453 [2024-12-12 10:22:56.823021] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:23.453 [2024-12-12 10:22:56.823028] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:23.453 [2024-12-12 10:22:56.823034] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:23.453 [2024-12-12 10:22:56.823039] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:23.453 [2024-12-12 10:22:56.824554] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:09:23.453 [2024-12-12 10:22:56.824665] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:09:23.453 [2024-12-12 10:22:56.824773] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:23.453 [2024-12-12 10:22:56.824774] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:09:23.453 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:23.453 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:23.453 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:23.453 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:23.453 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:23.453 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:23.453 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:23.453 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.453 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:23.453 [2024-12-12 10:22:56.965927] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:23.453 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.453 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:23.453 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.453 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:23.453 Malloc0 00:09:23.453 10:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.453 10:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:23.453 10:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.453 10:22:57 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:23.453 10:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.453 10:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:23.453 10:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.453 10:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:23.453 10:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.453 10:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:23.453 10:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.453 10:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:23.453 [2024-12-12 10:22:57.031632] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:23.453 10:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.453 10:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:23.453 10:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:23.453 10:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:23.453 10:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:23.453 10:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:23.453 10:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:23.453 { 00:09:23.453 "params": { 00:09:23.453 "name": "Nvme$subsystem", 00:09:23.453 "trtype": "$TEST_TRANSPORT", 00:09:23.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:23.453 "adrfam": "ipv4", 00:09:23.453 "trsvcid": "$NVMF_PORT", 00:09:23.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:23.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:23.453 "hdgst": ${hdgst:-false}, 00:09:23.453 "ddgst": ${ddgst:-false} 00:09:23.453 }, 00:09:23.453 "method": "bdev_nvme_attach_controller" 00:09:23.453 } 00:09:23.453 EOF 00:09:23.453 )") 00:09:23.453 10:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:23.453 10:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:09:23.453 10:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:23.453 10:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:23.453 "params": { 00:09:23.453 "name": "Nvme1", 00:09:23.453 "trtype": "tcp", 00:09:23.453 "traddr": "10.0.0.2", 00:09:23.453 "adrfam": "ipv4", 00:09:23.453 "trsvcid": "4420", 00:09:23.453 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:23.453 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:23.453 "hdgst": false, 00:09:23.453 "ddgst": false 00:09:23.453 }, 00:09:23.453 "method": "bdev_nvme_attach_controller" 00:09:23.453 }' 00:09:23.453 [2024-12-12 10:22:57.080753] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
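Collapsed out of the rpc_cmd traces above (rpc_cmd being the harness wrapper around scripts/rpc.py), the target-side provisioning and the bdevio launch amount to:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB backing bdev, 512 B blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# bdevio then attaches as an initiator, consuming the bdev_nvme_attach_controller
# JSON printed above on file descriptor 62:
test/bdev/bdevio/bdevio --json /dev/fd/62

The "Nvme1n1: 131072 blocks of 512 bytes (64 MiB)" that bdevio lists under I/O targets below is exactly this malloc bdev seen over the TCP transport.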
00:09:23.453 [2024-12-12 10:22:57.080795] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1412210 ] 00:09:23.453 [2024-12-12 10:22:57.153957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:23.453 [2024-12-12 10:22:57.197186] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:23.453 [2024-12-12 10:22:57.197294] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.453 [2024-12-12 10:22:57.197295] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:23.453 I/O targets: 00:09:23.453 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:23.453 00:09:23.453 00:09:23.453 CUnit - A unit testing framework for C - Version 2.1-3 00:09:23.453 http://cunit.sourceforge.net/ 00:09:23.453 00:09:23.453 00:09:23.453 Suite: bdevio tests on: Nvme1n1 00:09:23.453 Test: blockdev write read block ...passed 00:09:23.712 Test: blockdev write zeroes read block ...passed 00:09:23.712 Test: blockdev write zeroes read no split ...passed 00:09:23.712 Test: blockdev write zeroes read split ...passed 00:09:23.712 Test: blockdev write zeroes read split partial ...passed 00:09:23.712 Test: blockdev reset ...[2024-12-12 10:22:57.513943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:23.712 [2024-12-12 10:22:57.514004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b610 (9): Bad file descriptor 00:09:23.712 [2024-12-12 10:22:57.529257] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:09:23.712 passed 00:09:23.712 Test: blockdev write read 8 blocks ...passed 00:09:23.712 Test: blockdev write read size > 128k ...passed 00:09:23.712 Test: blockdev write read invalid size ...passed 00:09:23.712 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:23.712 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:23.712 Test: blockdev write read max offset ...passed 00:09:23.712 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:23.972 Test: blockdev writev readv 8 blocks ...passed 00:09:23.972 Test: blockdev writev readv 30 x 1block ...passed 00:09:23.972 Test: blockdev writev readv block ...passed 00:09:23.972 Test: blockdev writev readv size > 128k ...passed 00:09:23.972 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:23.972 Test: blockdev comparev and writev ...[2024-12-12 10:22:57.781354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:23.972 [2024-12-12 10:22:57.781384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:23.972 [2024-12-12 10:22:57.781398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:23.972 [2024-12-12 10:22:57.781406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:23.972 [2024-12-12 10:22:57.781650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:23.972 [2024-12-12 10:22:57.781662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:23.972 [2024-12-12 10:22:57.781673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:23.972 [2024-12-12 10:22:57.781680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:23.972 [2024-12-12 10:22:57.781912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:23.972 [2024-12-12 10:22:57.781922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:23.972 [2024-12-12 10:22:57.781934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:23.972 [2024-12-12 10:22:57.781941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:23.972 [2024-12-12 10:22:57.782165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:23.972 [2024-12-12 10:22:57.782175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:23.972 [2024-12-12 10:22:57.782187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:23.972 [2024-12-12 10:22:57.782199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:23.972 passed 00:09:23.972 Test: blockdev nvme passthru rw ...passed 00:09:23.972 Test: blockdev nvme passthru vendor specific ...[2024-12-12 10:22:57.864088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:23.972 [2024-12-12 10:22:57.864105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:23.972 [2024-12-12 10:22:57.864208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:23.972 [2024-12-12 10:22:57.864218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:23.972 [2024-12-12 10:22:57.864321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:23.972 [2024-12-12 10:22:57.864332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:23.972 [2024-12-12 10:22:57.864426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:23.972 [2024-12-12 10:22:57.864437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:23.972 passed 00:09:23.972 Test: blockdev nvme admin passthru ...passed 00:09:23.972 Test: blockdev copy ...passed 00:09:23.972 00:09:23.972 Run Summary: Type Total Ran Passed Failed Inactive 00:09:23.972 suites 1 1 n/a 0 0 00:09:23.972 tests 23 23 23 0 0 00:09:23.972 asserts 152 152 152 0 n/a 00:09:23.972 00:09:23.972 Elapsed time = 1.035 seconds 00:09:24.231 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:24.231 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.231 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:24.231 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.231 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:24.231 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:24.231 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:24.231 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:24.232 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:24.232 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:24.232 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:24.232 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:24.232 rmmod nvme_tcp 00:09:24.232 rmmod nvme_fabrics 00:09:24.232 rmmod nvme_keyring 00:09:24.232 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:24.232 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:24.232 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
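[editor's note] The nvmfcleanup trace above (sync, set +e, for i in {1..20}, modprobe -v -r) unloads the kernel initiator modules with a retry loop so that straggling references from the just-finished test do not fail the teardown; here it succeeds on the first pass and returns 0. A minimal reconstruction of that traced pattern follows; the function name is illustrative and this is not the verbatim nvmf/common.sh source.

# Sketch of the module-unload retry traced above. modprobe -r also
# drops dependent modules, which is why rmmod reports nvme_tcp,
# nvme_fabrics and nvme_keyring in the log.
cleanup_nvme_tcp() {
    sync
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp
        # stop once the fabrics module also unloads cleanly
        modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e
}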
00:09:24.232 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1412180 ']' 00:09:24.232 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1412180 00:09:24.232 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1412180 ']' 00:09:24.232 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1412180 00:09:24.232 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:24.232 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:24.232 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1412180 00:09:24.232 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:24.232 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:24.232 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1412180' 00:09:24.232 killing process with pid 1412180 00:09:24.232 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1412180 00:09:24.232 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1412180 00:09:24.491 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:24.491 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:24.491 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:24.491 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:24.491 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:24.491 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:24.491 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:24.491 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:24.491 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:24.491 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.491 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:24.491 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.397 10:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:26.397 00:09:26.397 real 0m9.957s 00:09:26.397 user 0m9.872s 00:09:26.397 sys 0m4.900s 00:09:26.397 10:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.397 10:23:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:26.397 ************************************ 00:09:26.397 END TEST nvmf_bdevio 00:09:26.397 ************************************ 00:09:26.656 10:23:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:26.656 00:09:26.656 real 4m35.023s 00:09:26.656 user 10m20.787s 00:09:26.656 sys 1m37.324s 
00:09:26.656 10:23:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.656 10:23:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:26.656 ************************************ 00:09:26.656 END TEST nvmf_target_core 00:09:26.656 ************************************ 00:09:26.656 10:23:00 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:26.656 10:23:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:26.657 10:23:00 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.657 10:23:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:26.657 ************************************ 00:09:26.657 START TEST nvmf_target_extra 00:09:26.657 ************************************ 00:09:26.657 10:23:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:26.657 * Looking for test storage... 00:09:26.657 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:26.657 10:23:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:26.657 10:23:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:09:26.657 10:23:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:26.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.916 --rc genhtml_branch_coverage=1 00:09:26.916 --rc genhtml_function_coverage=1 00:09:26.916 --rc genhtml_legend=1 00:09:26.916 --rc geninfo_all_blocks=1 00:09:26.916 --rc geninfo_unexecuted_blocks=1 00:09:26.916 00:09:26.916 ' 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:26.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.916 --rc genhtml_branch_coverage=1 00:09:26.916 --rc genhtml_function_coverage=1 00:09:26.916 --rc genhtml_legend=1 00:09:26.916 --rc geninfo_all_blocks=1 00:09:26.916 --rc geninfo_unexecuted_blocks=1 00:09:26.916 00:09:26.916 ' 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:26.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.916 --rc genhtml_branch_coverage=1 00:09:26.916 --rc genhtml_function_coverage=1 00:09:26.916 --rc genhtml_legend=1 00:09:26.916 --rc geninfo_all_blocks=1 00:09:26.916 --rc geninfo_unexecuted_blocks=1 00:09:26.916 00:09:26.916 ' 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:26.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.916 --rc genhtml_branch_coverage=1 00:09:26.916 --rc genhtml_function_coverage=1 00:09:26.916 --rc genhtml_legend=1 00:09:26.916 --rc geninfo_all_blocks=1 00:09:26.916 --rc geninfo_unexecuted_blocks=1 00:09:26.916 00:09:26.916 ' 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:26.916 10:23:00 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.917 10:23:00 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.917 10:23:00 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.917 10:23:00 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:26.917 10:23:00 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.917 10:23:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:26.917 10:23:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:26.917 10:23:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:26.917 10:23:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:26.917 10:23:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:26.917 10:23:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:26.917 10:23:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:26.917 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:26.917 10:23:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:26.917 10:23:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:26.917 10:23:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:26.917 10:23:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:26.917 10:23:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:26.917 10:23:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:26.917 10:23:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:26.917 10:23:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:26.917 10:23:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.917 10:23:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:26.917 ************************************ 00:09:26.917 START TEST nvmf_example 00:09:26.917 ************************************ 00:09:26.917 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:26.917 * Looking for test storage... 
00:09:26.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:26.917 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:26.917 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:09:26.917 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:27.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.177 --rc genhtml_branch_coverage=1 00:09:27.177 --rc genhtml_function_coverage=1 00:09:27.177 --rc genhtml_legend=1 00:09:27.177 --rc geninfo_all_blocks=1 00:09:27.177 --rc geninfo_unexecuted_blocks=1 00:09:27.177 00:09:27.177 ' 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:27.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.177 --rc genhtml_branch_coverage=1 00:09:27.177 --rc genhtml_function_coverage=1 00:09:27.177 --rc genhtml_legend=1 00:09:27.177 --rc geninfo_all_blocks=1 00:09:27.177 --rc geninfo_unexecuted_blocks=1 00:09:27.177 00:09:27.177 ' 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:27.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.177 --rc genhtml_branch_coverage=1 00:09:27.177 --rc genhtml_function_coverage=1 00:09:27.177 --rc genhtml_legend=1 00:09:27.177 --rc geninfo_all_blocks=1 00:09:27.177 --rc geninfo_unexecuted_blocks=1 00:09:27.177 00:09:27.177 ' 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:27.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.177 --rc genhtml_branch_coverage=1 00:09:27.177 --rc genhtml_function_coverage=1 00:09:27.177 --rc genhtml_legend=1 00:09:27.177 --rc geninfo_all_blocks=1 00:09:27.177 --rc geninfo_unexecuted_blocks=1 00:09:27.177 00:09:27.177 ' 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:27.177 10:23:00 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:27.177 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:27.178 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:27.178 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:27.178 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:27.178 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:27.178 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:27.178 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.178 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.178 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.178 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:27.178 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.178 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:27.178 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:27.178 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:27.178 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:27.178 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:27.178 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:27.178 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:27.178 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:27.178 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:27.178 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:27.178 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:27.178 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:27.178 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:27.178 10:23:00 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:27.178 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:27.178 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:27.178 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:27.178 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:27.178 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:27.178 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:27.178 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:27.178 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:27.178 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:27.178 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:27.178 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:27.178 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:27.178 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:27.178 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.178 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:27.178 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.178 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:27.178 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:27.178 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:27.178 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:09:33.813 10:23:06 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:33.813 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:33.813 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:33.813 Found net devices under 0000:af:00.0: cvl_0_0 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:33.813 Found net devices under 0000:af:00.1: cvl_0_1 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.813 10:23:06 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:33.813 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:33.814 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:33.814 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:33.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.373 ms 00:09:33.814 00:09:33.814 --- 10.0.0.2 ping statistics --- 00:09:33.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.814 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:09:33.814 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:33.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:33.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:09:33.814 00:09:33.814 --- 10.0.0.1 ping statistics --- 00:09:33.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.814 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:09:33.814 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:33.814 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:09:33.814 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:33.814 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:33.814 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:33.814 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:33.814 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:33.814 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:33.814 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:33.814 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:33.814 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:33.814 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:33.814 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:33.814 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:33.814 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:33.814 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1415962 00:09:33.814 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:33.814 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:33.814 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1415962 00:09:33.814 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 1415962 ']' 00:09:33.814 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.814 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:33.814 10:23:06 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.814 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:33.814 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:34.072 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:34.072 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:09:34.072 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:34.072 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:34.072 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:34.072 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:34.072 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.072 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:34.072 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.072 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:34.072 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.073 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:34.073 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.073 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:34.073 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:34.073 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.073 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:34.073 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.073 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:34.073 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:34.073 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.073 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:34.073 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.073 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:34.073 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable
00:09:34.073 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:09:34.073 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:34.073 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:09:34.073 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:09:46.278 Initializing NVMe Controllers
00:09:46.278 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:46.278 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:46.278 Initialization complete. Launching workers.
00:09:46.278 ========================================================
00:09:46.278                                                                              Latency(us)
00:09:46.278 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:09:46.278 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   18477.58      72.18    3463.15     651.34   15790.59
00:09:46.278 ========================================================
00:09:46.278 Total                                                                    :   18477.58      72.18    3463.15     651.34   15790.59
00:09:46.278
00:09:46.278 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:09:46.278 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:09:46.278 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup
00:09:46.278 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync
00:09:46.278 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:46.278 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e
00:09:46.278 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:46.278 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:09:46.278 rmmod nvme_tcp
00:09:46.278 rmmod nvme_fabrics
00:09:46.278 rmmod nvme_keyring
00:09:46.278 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:46.278 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e
00:09:46.278 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0
00:09:46.278 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 1415962 ']'
00:09:46.278 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 1415962
00:09:46.278 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 1415962 ']'
00:09:46.278 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 1415962
00:09:46.279 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname
00:09:46.279 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:46.279 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1415962
00:09:46.279 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf
00:09:46.279 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']'
00:09:46.279 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1415962'
00:09:46.279 killing process with pid 1415962
00:09:46.279 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 1415962
00:09:46.279 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 1415962
00:09:46.279 nvmf threads initialize successfully
00:09:46.279 bdev subsystem init successfully
00:09:46.279 created a nvmf target service
00:09:46.279 create targets's poll groups done
00:09:46.279 all subsystems of target started
00:09:46.279 nvmf target is running
00:09:46.279 all subsystems of target stopped
00:09:46.279 destroy targets's poll groups done
00:09:46.279 destroyed the nvmf target service
00:09:46.279 bdev subsystem finish successfully
00:09:46.279 nvmf threads destroy successfully
00:09:46.279 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:09:46.279 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:09:46.279 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:09:46.279 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr
00:09:46.279 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save
00:09:46.279 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:09:46.279 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore
00:09:46.279 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:09:46.279 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns
00:09:46.279 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:46.279 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:46.279 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:46.846 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:09:46.846 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:09:46.846 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:46.846 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:09:46.846
00:09:46.846 real 0m19.854s
00:09:46.846 user 0m46.120s
00:09:46.846 sys 0m6.082s
00:09:46.846 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:46.846 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:09:46.846 ************************************
00:09:46.846 END TEST nvmf_example
00:09:46.846 ************************************
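The perf table above is internally consistent: at the 4 KiB I/O size requested with -o 4096, 18477.58 IOPS works out to 18477.58 × 4096 / 2^20 ≈ 72.18 MiB/s, so the throughput column is the IOPS column rescaled. The teardown that follows it, condensed into one sketch (the pid is held in a shell variable here, and the namespace removal inside _remove_spdk_ns is assumed, since the trace redirects its output away):

    sync                                  # flush before unloading kernel modules
    modprobe -v -r nvme-tcp               # drops nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"    # stop the example target and reap it
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # remove only the test's rules
    ip netns del cvl_0_0_ns_spdk          # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1              # clear the peer interface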
00:09:46.846 10:23:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:09:46.846 10:23:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:46.846 10:23:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:46.846 10:23:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:09:46.846 ************************************
00:09:46.846 START TEST nvmf_filesystem
00:09:46.846 ************************************
00:09:46.846 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:09:46.846 * Looking for test storage...
00:09:46.846 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:09:46.847 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:46.847 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version
00:09:46.847 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:46.847 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-:
00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1
00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-:
00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2
00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<'
00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2
00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1
00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in
00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1
00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:47.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.109 --rc genhtml_branch_coverage=1 00:09:47.109 --rc genhtml_function_coverage=1 00:09:47.109 --rc genhtml_legend=1 00:09:47.109 --rc geninfo_all_blocks=1 00:09:47.109 --rc geninfo_unexecuted_blocks=1 00:09:47.109 00:09:47.109 ' 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:47.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.109 --rc genhtml_branch_coverage=1 00:09:47.109 --rc genhtml_function_coverage=1 00:09:47.109 --rc genhtml_legend=1 00:09:47.109 --rc geninfo_all_blocks=1 00:09:47.109 --rc geninfo_unexecuted_blocks=1 00:09:47.109 00:09:47.109 ' 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:47.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.109 --rc genhtml_branch_coverage=1 00:09:47.109 --rc genhtml_function_coverage=1 00:09:47.109 --rc genhtml_legend=1 00:09:47.109 --rc geninfo_all_blocks=1 00:09:47.109 --rc geninfo_unexecuted_blocks=1 00:09:47.109 00:09:47.109 ' 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:47.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.109 --rc genhtml_branch_coverage=1 00:09:47.109 --rc genhtml_function_coverage=1 00:09:47.109 --rc genhtml_legend=1 00:09:47.109 --rc geninfo_all_blocks=1 00:09:47.109 --rc geninfo_unexecuted_blocks=1 00:09:47.109 00:09:47.109 ' 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:09:47.109 10:23:20 
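The lt/cmp_versions exchange above is a plain field-by-field numeric compare, used here to decide that the installed lcov (1.15) predates 2.x and therefore needs the --rc branch/function coverage options it then exports. A simplified sketch of the same logic (the real cmp_versions also handles the '>' and '=' operators):

    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first lower field decides
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo "old lcov: keep the --rc coverage flags"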
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:47.109 
10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:09:47.109 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]]
00:09:47.110 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H
00:09:47.110 #define SPDK_CONFIG_H
00:09:47.110 #define SPDK_CONFIG_AIO_FSDEV 1
00:09:47.110 #define SPDK_CONFIG_APPS 1
00:09:47.110 #define SPDK_CONFIG_ARCH native
00:09:47.110 #undef SPDK_CONFIG_ASAN
00:09:47.110 #undef SPDK_CONFIG_AVAHI
00:09:47.110 #undef SPDK_CONFIG_CET
00:09:47.110 #define SPDK_CONFIG_COPY_FILE_RANGE 1
00:09:47.110 #define SPDK_CONFIG_COVERAGE 1
00:09:47.110 #define SPDK_CONFIG_CROSS_PREFIX
00:09:47.110 #undef SPDK_CONFIG_CRYPTO
00:09:47.110 #undef SPDK_CONFIG_CRYPTO_MLX5
00:09:47.110 #undef SPDK_CONFIG_CUSTOMOCF
00:09:47.110 #undef SPDK_CONFIG_DAOS
00:09:47.110 #define SPDK_CONFIG_DAOS_DIR
00:09:47.110 #define SPDK_CONFIG_DEBUG 1
00:09:47.110 #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:09:47.110 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:09:47.110 #define SPDK_CONFIG_DPDK_INC_DIR
00:09:47.110 #define SPDK_CONFIG_DPDK_LIB_DIR
00:09:47.110 #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:09:47.110 #undef SPDK_CONFIG_DPDK_UADK
00:09:47.110 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:09:47.110 #define SPDK_CONFIG_EXAMPLES 1
00:09:47.110 #undef SPDK_CONFIG_FC
00:09:47.110 #define SPDK_CONFIG_FC_PATH
00:09:47.110 #define SPDK_CONFIG_FIO_PLUGIN 1
00:09:47.110 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:09:47.110 #define SPDK_CONFIG_FSDEV 1
00:09:47.110 #undef SPDK_CONFIG_FUSE
00:09:47.110 #undef SPDK_CONFIG_FUZZER
00:09:47.110 #define SPDK_CONFIG_FUZZER_LIB
00:09:47.110 #undef SPDK_CONFIG_GOLANG
00:09:47.110 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1
00:09:47.110 #define SPDK_CONFIG_HAVE_EVP_MAC 1
00:09:47.110 #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:09:47.110 #define SPDK_CONFIG_HAVE_KEYUTILS 1
00:09:47.110 #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:09:47.110 #undef SPDK_CONFIG_HAVE_LIBBSD
00:09:47.110 #undef SPDK_CONFIG_HAVE_LZ4
00:09:47.110 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1
00:09:47.110 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC
00:09:47.110 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:09:47.110 #define SPDK_CONFIG_IDXD 1
00:09:47.110 #define SPDK_CONFIG_IDXD_KERNEL 1
00:09:47.110 #undef SPDK_CONFIG_IPSEC_MB
00:09:47.110 #define SPDK_CONFIG_IPSEC_MB_DIR
00:09:47.110 #define SPDK_CONFIG_ISAL 1
00:09:47.110 #define SPDK_CONFIG_ISAL_CRYPTO 1
00:09:47.110 #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:09:47.110 #define SPDK_CONFIG_LIBDIR
00:09:47.110 #undef SPDK_CONFIG_LTO
00:09:47.110 #define SPDK_CONFIG_MAX_LCORES 128
00:09:47.110 #define SPDK_CONFIG_MAX_NUMA_NODES 1
00:09:47.110 #define SPDK_CONFIG_NVME_CUSE 1
00:09:47.110 #undef SPDK_CONFIG_OCF
00:09:47.110 #define SPDK_CONFIG_OCF_PATH
00:09:47.110 #define SPDK_CONFIG_OPENSSL_PATH
00:09:47.110 #undef SPDK_CONFIG_PGO_CAPTURE
00:09:47.110 #define SPDK_CONFIG_PGO_DIR
00:09:47.110 #undef SPDK_CONFIG_PGO_USE
00:09:47.110 #define SPDK_CONFIG_PREFIX /usr/local
00:09:47.110 #undef SPDK_CONFIG_RAID5F
00:09:47.110 #undef SPDK_CONFIG_RBD
00:09:47.110 #define SPDK_CONFIG_RDMA 1
00:09:47.110 #define SPDK_CONFIG_RDMA_PROV verbs
00:09:47.110 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:09:47.110 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:09:47.110 #define SPDK_CONFIG_RDMA_SET_TOS 1
00:09:47.110 #define SPDK_CONFIG_SHARED 1
00:09:47.110 #undef SPDK_CONFIG_SMA
00:09:47.110 #define SPDK_CONFIG_TESTS 1
00:09:47.110 #undef SPDK_CONFIG_TSAN
00:09:47.110 #define SPDK_CONFIG_UBLK 1 00:09:47.110 #define SPDK_CONFIG_UBSAN 1 00:09:47.110 #undef SPDK_CONFIG_UNIT_TESTS 00:09:47.110 #undef SPDK_CONFIG_URING 00:09:47.110 #define SPDK_CONFIG_URING_PATH 00:09:47.111 #undef SPDK_CONFIG_URING_ZNS 00:09:47.111 #undef SPDK_CONFIG_USDT 00:09:47.111 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:47.111 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:47.111 #define SPDK_CONFIG_VFIO_USER 1 00:09:47.111 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:47.111 #define SPDK_CONFIG_VHOST 1 00:09:47.111 #define SPDK_CONFIG_VIRTIO 1 00:09:47.111 #undef SPDK_CONFIG_VTUNE 00:09:47.111 #define SPDK_CONFIG_VTUNE_DIR 00:09:47.111 #define SPDK_CONFIG_WERROR 1 00:09:47.111 #define SPDK_CONFIG_WPDK_DIR 00:09:47.111 #undef SPDK_CONFIG_XNVME 00:09:47.111 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:47.111 10:23:20 
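The config.h dump a few lines up exists for one bit of information: applications.sh decides whether this is a debug build by glob-matching the whole file against the SPDK_CONFIG_DEBUG define, rather than shelling out to grep. A minimal sketch of that probe, with the follow-on use reduced to a plain echo:

    config_h=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h
    if [[ -e $config_h && $(<"$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        echo "debug build"    # gates the SPDK_AUTOTEST_DEBUG_APPS behaviour checked at @24 above
    fi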
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
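Every SPDK_TEST_* flag in the long run that starts here produces the same two trace lines, ': <value>' followed by 'export NAME'; that is the shell idiom for defaulting a variable without clobbering a value the caller already set. Using one flag from this run as the example:

    : "${SPDK_TEST_NVMF:=0}"   # the ': 1' traced above shows a caller-set value surviving; 0 only if unset
    export SPDK_TEST_NVMF      # visible to every child test script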
00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:47.111 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:47.112 10:23:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:09:47.112 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
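The two sanitizer strings just exported are colon-separated runtime options: abort_on_error=1 turns any ASan/UBSan report into an abort the harness can catch, and UBSan's exitcode=134 appears chosen to mimic a SIGABRT death (128 + signal 6). Split out for readability:

    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134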
00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
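The sanitizer environment assembled in the trace above can be reproduced standalone. A minimal sketch, using the exact option strings and the /var/tmp/asan_suppression_file path shown in the trace (the leak suppression for libfuse3.so is written the same way the traced helper does it):

# Sanitizer env as exported by the harness above (values copied from the trace).
export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134

# LSAN takes leak suppressions from a plain-text file, one "leak:<pattern>" per line.
rm -rf /var/tmp/asan_suppression_file
echo "leak:libfuse3.so" > /var/tmp/asan_suppression_file
export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file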
00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 1418327 ]] 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 1418327 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 
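The set_test_storage call entered here is traced over the next several lines: it parses df output into per-mount arrays, walks the candidate directories, and exports the first one whose filesystem can hold the requested 2 GiB plus slack. A simplified sketch of that selection logic (not the verbatim helper), assuming GNU df and a $testdir pointing at the test's source directory as in the harness:

# Simplified sketch of set_test_storage.
requested_size=$((2 * 1024 * 1024 * 1024))    # 2147483648, as passed above;
requested_size=$((requested_size + 64 * 1024 * 1024))  # the trace shows 64 MiB of slack (2214592512 total)
storage_fallback=$(mktemp -udt spdk.XXXXXX)   # same mktemp pattern as the trace
storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")

for target_dir in "${storage_candidates[@]}"; do
  # df reports available space in 1K blocks; convert to bytes before comparing.
  avail_kb=$(df --output=avail "$target_dir" 2>/dev/null | tail -1) || continue
  if (( avail_kb * 1024 >= requested_size )); then
    mkdir -p "$target_dir"
    export SPDK_TEST_STORAGE=$target_dir
    printf '* Found test storage at %s\n' "$target_dir"
    break
  fi
done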
00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.xZPYlw 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:09:47.113 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.xZPYlw/tests/target /tmp/spdk.xZPYlw 00:09:47.113 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:09:47.113 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:47.113 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:09:47.113 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:09:47.113 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:09:47.113 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:09:47.113 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:09:47.113 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:09:47.113 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:09:47.113 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:47.113 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:09:47.113 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:09:47.113 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=722997248 00:09:47.113 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:09:47.113 10:23:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=4561432576 00:09:47.113 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:47.113 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:09:47.113 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:09:47.113 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=89465049088 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=95552413696 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6087364608 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=47766175744 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=47776206848 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19087474688 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19110486016 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23011328 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=47775846400 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=47776206848 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=360448 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:47.114 10:23:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=9555226624 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=9555238912 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:09:47.114 * Looking for test storage... 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=89465049088 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8301957120 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:47.114 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:09:47.114 10:23:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:47.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.114 --rc genhtml_branch_coverage=1 00:09:47.114 --rc genhtml_function_coverage=1 00:09:47.114 --rc genhtml_legend=1 00:09:47.114 --rc geninfo_all_blocks=1 00:09:47.114 --rc geninfo_unexecuted_blocks=1 00:09:47.114 00:09:47.114 ' 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:47.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.114 --rc genhtml_branch_coverage=1 00:09:47.114 --rc genhtml_function_coverage=1 00:09:47.114 --rc genhtml_legend=1 00:09:47.114 --rc geninfo_all_blocks=1 00:09:47.114 --rc geninfo_unexecuted_blocks=1 00:09:47.114 00:09:47.114 ' 00:09:47.114 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:47.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.114 --rc genhtml_branch_coverage=1 00:09:47.114 --rc genhtml_function_coverage=1 00:09:47.114 --rc genhtml_legend=1 00:09:47.114 --rc geninfo_all_blocks=1 00:09:47.114 --rc geninfo_unexecuted_blocks=1 00:09:47.114 00:09:47.114 ' 00:09:47.115 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:47.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.115 --rc genhtml_branch_coverage=1 00:09:47.115 --rc genhtml_function_coverage=1 00:09:47.115 --rc genhtml_legend=1 00:09:47.115 --rc geninfo_all_blocks=1 00:09:47.115 --rc geninfo_unexecuted_blocks=1 00:09:47.115 00:09:47.115 ' 00:09:47.115 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:47.115 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:47.115 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:47.115 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:47.115 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:47.375 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:09:47.375 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:09:53.943 
10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:53.943 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:53.943 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:53.943 Found net devices under 0000:af:00.0: cvl_0_0 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:53.943 Found net devices under 
0000:af:00.1: cvl_0_1 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:53.943 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:53.943 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:53.943 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:53.943 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:53.943 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:53.943 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:53.943 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.498 ms 00:09:53.943 00:09:53.943 --- 10.0.0.2 ping statistics --- 00:09:53.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.943 rtt min/avg/max/mdev = 0.498/0.498/0.498/0.000 ms 00:09:53.943 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:53.943 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:53.943 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:09:53.943 00:09:53.943 --- 10.0.0.1 ping statistics --- 00:09:53.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.943 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:09:53.943 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:53.943 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:09:53.943 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:53.943 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:53.943 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:53.943 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:53.943 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:53.943 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:53.943 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:53.943 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:53.943 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:53.943 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:53.943 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:53.943 ************************************ 00:09:53.943 START TEST nvmf_filesystem_no_in_capsule 00:09:53.943 ************************************ 00:09:53.943 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:09:53.943 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:53.943 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:53.943 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:53.943 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:53.943 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
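The nvmf_tcp_init sequence traced above boils down to a small amount of iproute2 plumbing: the target-side port is moved into its own network namespace, both ends get addresses on 10.0.0.0/24, TCP port 4420 is opened, and reachability is verified in both directions before the kernel initiator module is loaded. Condensed from the commands in the trace:

# Namespace plumbing for the TCP transport, condensed from the trace above.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                    # target NIC port
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # root ns -> target ns
ip netns exec "$NS" ping -c 1 10.0.0.1             # target ns -> root ns
modprobe nvme-tcp                                  # kernel initiator for the later nvme connect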
00:09:53.943 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1421530 00:09:53.943 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1421530 00:09:53.943 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:53.943 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1421530 ']' 00:09:53.943 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.943 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:53.943 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.943 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:53.943 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:53.943 [2024-12-12 10:23:27.217888] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:09:53.943 [2024-12-12 10:23:27.217931] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:53.943 [2024-12-12 10:23:27.296332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:53.943 [2024-12-12 10:23:27.338218] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:53.943 [2024-12-12 10:23:27.338254] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:53.944 [2024-12-12 10:23:27.338261] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:53.944 [2024-12-12 10:23:27.338267] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:53.944 [2024-12-12 10:23:27.338272] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
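nvmfappstart, whose DPDK/EAL startup notices appear above, launches the target inside that namespace and blocks until its JSON-RPC socket answers. A hedged sketch of the launch-and-wait pattern (waitforlisten is simplified here to a poll of rpc_get_methods over /var/tmp/spdk.sock), followed by the subsystem assembly mirroring the rpc_cmd calls traced below:

# Launch the target in the namespace and wait for its RPC socket (sketch).
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
until "$spdk/scripts/rpc.py" -t 1 -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
  kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
  sleep 0.5
done

# Subsystem assembly over RPC, same calls as the rpc_cmd trace below.
rpc() { "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }
rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
rpc bdev_malloc_create 512 512 -b Malloc1
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420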
00:09:53.944 [2024-12-12 10:23:27.339629] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.944 [2024-12-12 10:23:27.339663] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:53.944 [2024-12-12 10:23:27.339693] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.944 [2024-12-12 10:23:27.339694] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:53.944 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:53.944 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:09:53.944 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:53.944 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:53.944 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:53.944 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:53.944 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:53.944 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:53.944 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.944 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:53.944 [2024-12-12 10:23:27.476767] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:53.944 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.944 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:53.944 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.944 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:53.944 Malloc1 00:09:53.944 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.944 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:53.944 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.944 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:53.944 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.944 10:23:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:53.944 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.944 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:53.944 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.944 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:53.944 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.944 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:53.944 [2024-12-12 10:23:27.626351] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:53.944 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.944 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:53.944 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:09:53.944 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:09:53.944 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:09:53.944 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:09:53.944 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:53.944 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.944 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:53.944 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.944 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:09:53.944 { 00:09:53.944 "name": "Malloc1", 00:09:53.944 "aliases": [ 00:09:53.944 "2ee0fbc7-cf3d-4b26-acf4-56da465cb502" 00:09:53.944 ], 00:09:53.944 "product_name": "Malloc disk", 00:09:53.944 "block_size": 512, 00:09:53.944 "num_blocks": 1048576, 00:09:53.944 "uuid": "2ee0fbc7-cf3d-4b26-acf4-56da465cb502", 00:09:53.944 "assigned_rate_limits": { 00:09:53.944 "rw_ios_per_sec": 0, 00:09:53.944 "rw_mbytes_per_sec": 0, 00:09:53.944 "r_mbytes_per_sec": 0, 00:09:53.944 "w_mbytes_per_sec": 0 00:09:53.944 }, 00:09:53.944 "claimed": true, 00:09:53.944 "claim_type": "exclusive_write", 00:09:53.944 "zoned": false, 00:09:53.944 "supported_io_types": { 00:09:53.944 "read": 
true, 00:09:53.944 "write": true, 00:09:53.944 "unmap": true, 00:09:53.944 "flush": true, 00:09:53.944 "reset": true, 00:09:53.944 "nvme_admin": false, 00:09:53.944 "nvme_io": false, 00:09:53.944 "nvme_io_md": false, 00:09:53.944 "write_zeroes": true, 00:09:53.944 "zcopy": true, 00:09:53.944 "get_zone_info": false, 00:09:53.944 "zone_management": false, 00:09:53.944 "zone_append": false, 00:09:53.944 "compare": false, 00:09:53.944 "compare_and_write": false, 00:09:53.944 "abort": true, 00:09:53.944 "seek_hole": false, 00:09:53.944 "seek_data": false, 00:09:53.944 "copy": true, 00:09:53.944 "nvme_iov_md": false 00:09:53.944 }, 00:09:53.944 "memory_domains": [ 00:09:53.944 { 00:09:53.944 "dma_device_id": "system", 00:09:53.944 "dma_device_type": 1 00:09:53.944 }, 00:09:53.944 { 00:09:53.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.944 "dma_device_type": 2 00:09:53.944 } 00:09:53.944 ], 00:09:53.944 "driver_specific": {} 00:09:53.944 } 00:09:53.944 ]' 00:09:53.944 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:09:53.944 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:09:53.944 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:09:53.944 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:09:53.944 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:09:53.944 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:09:53.944 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:53.944 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:54.878 10:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:54.878 10:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:09:54.878 10:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:54.878 10:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:54.878 10:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:09:57.406 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:57.406 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:57.406 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:09:57.406 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:57.406 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:57.406 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:09:57.406 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:57.406 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:57.406 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:57.406 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:57.406 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:57.406 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:57.406 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:57.406 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:57.406 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:57.406 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:57.406 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:57.406 10:23:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:57.664 10:23:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:58.597 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:58.597 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:58.597 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:58.597 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.597 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:58.855 ************************************ 00:09:58.855 START TEST filesystem_ext4 00:09:58.855 ************************************ 00:09:58.855 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
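[Editor's note] Up to this point the trace shows the host side of the no_in_capsule variant, before the ext4 subtest body begins below: nvme-cli connects to the SPDK target, waitforserial polls lsblk until a device with the SPDK serial appears, the exported size is compared against the 512 MiB Malloc1 bdev, and the namespace gets a single GPT partition for the mkfs subtests. A condensed bash sketch of those steps follows; the NQN, host UUID, address, and serial are copied from the log, while the until-loop is a simplification of the waitforserial retry counter, not the helper's exact logic.

    # Connect the kernel NVMe/TCP initiator to the SPDK subsystem (flags from the log).
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
        --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    # waitforserial: poll until the namespace shows up under its serial number.
    until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 2; done
    nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
    # filesystem.sh then checks the device size against the 512 MiB malloc bdev
    # (sec_size_to_bytes resolves 536870912 bytes via /sys/block in this run)
    # and partitions the whole namespace.
    mkdir -p /mnt/device
    parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe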
00:09:58.855 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:58.855 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:58.855 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:58.855 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:09:58.855 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:58.855 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:09:58.855 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:09:58.855 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:09:58.855 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:09:58.855 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:58.855 mke2fs 1.47.0 (5-Feb-2023) 00:09:58.855 Discarding device blocks: 0/522240 done 00:09:58.855 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:58.855 Filesystem UUID: ac090aaf-c6c1-4782-b481-339c8e0ce05f 00:09:58.855 Superblock backups stored on blocks: 00:09:58.855 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:58.855 00:09:58.855 Allocating group tables: 0/64 done 00:09:58.855 Writing inode tables: 0/64 done 00:09:59.114 Creating journal (8192 blocks): done 00:09:59.937 Writing superblocks and filesystem accounting information: 0/64 done 00:09:59.937 00:09:59.937 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:09:59.937 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:06.498 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:06.498 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:06.498 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:06.498 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:06.498 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:06.498 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:06.498 
10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1421530 00:10:06.498 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:06.498 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:06.498 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:06.498 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:06.498 00:10:06.498 real 0m7.029s 00:10:06.498 user 0m0.023s 00:10:06.498 sys 0m0.073s 00:10:06.498 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.498 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:06.498 ************************************ 00:10:06.498 END TEST filesystem_ext4 00:10:06.498 ************************************ 00:10:06.498 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:06.498 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:06.498 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:06.498 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:06.499 ************************************ 00:10:06.499 START TEST filesystem_btrfs 00:10:06.499 ************************************ 00:10:06.499 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:06.499 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:06.499 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:06.499 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:06.499 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:06.499 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:06.499 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:06.499 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:06.499 10:23:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:06.499 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:06.499 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:06.499 btrfs-progs v6.8.1 00:10:06.499 See https://btrfs.readthedocs.io for more information. 00:10:06.499 00:10:06.499 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:06.499 NOTE: several default settings have changed in version 5.15, please make sure 00:10:06.499 this does not affect your deployments: 00:10:06.499 - DUP for metadata (-m dup) 00:10:06.499 - enabled no-holes (-O no-holes) 00:10:06.499 - enabled free-space-tree (-R free-space-tree) 00:10:06.499 00:10:06.499 Label: (null) 00:10:06.499 UUID: d6eae779-2ddf-4f02-930a-47e5b8bdabb8 00:10:06.499 Node size: 16384 00:10:06.499 Sector size: 4096 (CPU page size: 4096) 00:10:06.499 Filesystem size: 510.00MiB 00:10:06.499 Block group profiles: 00:10:06.499 Data: single 8.00MiB 00:10:06.499 Metadata: DUP 32.00MiB 00:10:06.499 System: DUP 8.00MiB 00:10:06.499 SSD detected: yes 00:10:06.499 Zoned device: no 00:10:06.499 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:06.499 Checksum: crc32c 00:10:06.499 Number of devices: 1 00:10:06.499 Devices: 00:10:06.499 ID SIZE PATH 00:10:06.499 1 510.00MiB /dev/nvme0n1p1 00:10:06.499 00:10:06.499 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:06.499 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:06.499 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:06.499 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:06.757 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:06.757 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:06.758 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:06.758 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:06.758 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1421530 00:10:06.758 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:06.758 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:06.758 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:06.758 
10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:06.758 00:10:06.758 real 0m0.825s 00:10:06.758 user 0m0.020s 00:10:06.758 sys 0m0.122s 00:10:06.758 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.758 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:06.758 ************************************ 00:10:06.758 END TEST filesystem_btrfs 00:10:06.758 ************************************ 00:10:06.758 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:06.758 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:06.758 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:06.758 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:06.758 ************************************ 00:10:06.758 START TEST filesystem_xfs 00:10:06.758 ************************************ 00:10:06.758 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:06.758 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:06.758 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:06.758 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:06.758 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:06.758 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:06.758 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:06.758 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:10:06.758 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:06.758 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:06.758 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:06.758 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:06.758 = sectsz=512 attr=2, projid32bit=1 00:10:06.758 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:06.758 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:06.758 data 
= bsize=4096 blocks=130560, imaxpct=25 00:10:06.758 = sunit=0 swidth=0 blks 00:10:06.758 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:06.758 log =internal log bsize=4096 blocks=16384, version=2 00:10:06.758 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:06.758 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:08.131 Discarding blocks...Done. 00:10:08.131 10:23:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:08.132 10:23:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:10.030 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:10.030 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:10.030 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:10.030 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:10.030 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:10.030 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:10.030 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1421530 00:10:10.030 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:10.030 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:10.030 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:10.031 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:10.031 00:10:10.031 real 0m3.258s 00:10:10.031 user 0m0.020s 00:10:10.031 sys 0m0.078s 00:10:10.031 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:10.031 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:10.031 ************************************ 00:10:10.031 END TEST filesystem_xfs 00:10:10.031 ************************************ 00:10:10.031 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:10.289 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:10.289 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:10.289 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.289 10:23:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:10.289 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:10.289 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:10.289 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:10.546 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:10.546 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:10.546 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:10.546 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:10.546 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.546 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:10.546 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.546 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:10.546 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1421530 00:10:10.546 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1421530 ']' 00:10:10.546 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1421530 00:10:10.546 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:10.546 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:10.546 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1421530 00:10:10.546 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:10.546 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:10.546 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1421530' 00:10:10.546 killing process with pid 1421530 00:10:10.546 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 1421530 00:10:10.546 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 1421530 00:10:10.805 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:10.805 00:10:10.805 real 0m17.549s 00:10:10.805 user 1m9.074s 00:10:10.805 sys 0m1.396s 00:10:10.805 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:10.805 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:10.805 ************************************ 00:10:10.805 END TEST nvmf_filesystem_no_in_capsule 00:10:10.805 ************************************ 00:10:10.805 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:10.805 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:10.805 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:10.805 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:10.805 ************************************ 00:10:10.805 START TEST nvmf_filesystem_in_capsule 00:10:10.805 ************************************ 00:10:10.805 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:10:10.805 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:10.805 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:10.805 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:10.805 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:10.805 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:10.805 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1424588 00:10:10.805 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1424588 00:10:10.805 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:10.805 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1424588 ']' 00:10:10.805 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.805 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:10.805 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
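[Editor's note] The block above closes the no_in_capsule run: the filesystem tests took 17.5 s of wall time, the target (pid 1421530) is killed, and nvmf_filesystem_part is re-entered with in_capsule=4096. The lines that follow are a fresh nvmf_tgt (pid 1424588) coming up inside the cvl_0_0_ns_spdk network namespace; the only functional difference from the first run is -c 4096 on nvmf_create_transport, which lets up to 4 KiB of write data ride inside the NVMe/TCP command capsule instead of being fetched separately. A sketch of the target-side setup, assuming rpc_cmd resolves to SPDK's scripts/rpc.py against the default RPC socket (all flags are copied from the trace):

    # Start the target in the test namespace (core mask 0xF, all trace groups enabled).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # In-capsule variant: -c 4096 sets the in-capsule data size.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096
    ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1   # 512 MiB, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420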
00:10:10.805 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:10.805 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.063 [2024-12-12 10:23:44.845127] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:10:11.063 [2024-12-12 10:23:44.845174] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:11.063 [2024-12-12 10:23:44.925151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:11.063 [2024-12-12 10:23:44.966619] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:11.063 [2024-12-12 10:23:44.966656] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:11.063 [2024-12-12 10:23:44.966663] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:11.063 [2024-12-12 10:23:44.966669] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:11.063 [2024-12-12 10:23:44.966674] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:11.063 [2024-12-12 10:23:44.968016] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:11.064 [2024-12-12 10:23:44.968123] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:11.064 [2024-12-12 10:23:44.968224] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.064 [2024-12-12 10:23:44.968225] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:11.064 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:11.064 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:11.064 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:11.064 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:11.064 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.321 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:11.321 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:11.321 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:11.321 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.321 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.321 [2024-12-12 10:23:45.102386] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:11.321 10:23:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.321 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:11.321 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.321 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.321 Malloc1 00:10:11.321 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.321 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:11.321 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.321 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.321 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.321 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:11.321 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.321 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.321 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.321 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:11.321 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.321 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.321 [2024-12-12 10:23:45.279775] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:11.321 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.321 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:11.321 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:11.321 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:11.321 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:11.321 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:11.321 10:23:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:11.321 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.321 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.321 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.321 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:11.321 { 00:10:11.321 "name": "Malloc1", 00:10:11.321 "aliases": [ 00:10:11.321 "49267095-f00d-4741-8b5f-809cd5c6df7a" 00:10:11.321 ], 00:10:11.321 "product_name": "Malloc disk", 00:10:11.321 "block_size": 512, 00:10:11.321 "num_blocks": 1048576, 00:10:11.321 "uuid": "49267095-f00d-4741-8b5f-809cd5c6df7a", 00:10:11.321 "assigned_rate_limits": { 00:10:11.321 "rw_ios_per_sec": 0, 00:10:11.321 "rw_mbytes_per_sec": 0, 00:10:11.321 "r_mbytes_per_sec": 0, 00:10:11.321 "w_mbytes_per_sec": 0 00:10:11.321 }, 00:10:11.321 "claimed": true, 00:10:11.321 "claim_type": "exclusive_write", 00:10:11.321 "zoned": false, 00:10:11.321 "supported_io_types": { 00:10:11.321 "read": true, 00:10:11.321 "write": true, 00:10:11.321 "unmap": true, 00:10:11.321 "flush": true, 00:10:11.321 "reset": true, 00:10:11.321 "nvme_admin": false, 00:10:11.321 "nvme_io": false, 00:10:11.321 "nvme_io_md": false, 00:10:11.321 "write_zeroes": true, 00:10:11.321 "zcopy": true, 00:10:11.322 "get_zone_info": false, 00:10:11.322 "zone_management": false, 00:10:11.322 "zone_append": false, 00:10:11.322 "compare": false, 00:10:11.322 "compare_and_write": false, 00:10:11.322 "abort": true, 00:10:11.322 "seek_hole": false, 00:10:11.322 "seek_data": false, 00:10:11.322 "copy": true, 00:10:11.322 "nvme_iov_md": false 00:10:11.322 }, 00:10:11.322 "memory_domains": [ 00:10:11.322 { 00:10:11.322 "dma_device_id": "system", 00:10:11.322 "dma_device_type": 1 00:10:11.322 }, 00:10:11.322 { 00:10:11.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.322 "dma_device_type": 2 00:10:11.322 } 00:10:11.322 ], 00:10:11.322 "driver_specific": {} 00:10:11.322 } 00:10:11.322 ]' 00:10:11.322 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:11.578 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:11.578 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:11.578 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:11.578 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:11.578 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:11.578 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:11.578 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:12.950 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:12.950 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:12.950 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:12.950 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:12.950 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:14.849 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:14.849 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:14.849 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:14.849 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:14.849 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:14.849 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:14.849 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:14.849 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:14.849 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:14.849 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:14.849 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:14.849 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:14.849 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:14.849 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:14.849 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:14.849 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:14.849 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:14.849 10:23:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:15.414 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:16.785 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:16.786 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:16.786 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:16.786 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:16.786 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:16.786 ************************************ 00:10:16.786 START TEST filesystem_in_capsule_ext4 00:10:16.786 ************************************ 00:10:16.786 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:16.786 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:16.786 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:16.786 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:16.786 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:16.786 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:16.786 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:16.786 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:16.786 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:16.786 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:16.786 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:16.786 mke2fs 1.47.0 (5-Feb-2023) 00:10:16.786 Discarding device blocks: 0/522240 done 00:10:16.786 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:16.786 Filesystem UUID: faf7f685-4bc4-49b9-8ef8-bff3c2140a99 00:10:16.786 Superblock backups stored on blocks: 00:10:16.786 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:16.786 00:10:16.786 Allocating group tables: 0/64 done 00:10:16.786 Writing inode tables: 
0/64 done 00:10:16.786 Creating journal (8192 blocks): done 00:10:16.786 Writing superblocks and filesystem accounting information: 0/64 done 00:10:16.786 00:10:16.786 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:16.786 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:23.374 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:23.374 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:23.374 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:23.374 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:23.375 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:23.375 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:23.375 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1424588 00:10:23.375 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:23.375 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:23.375 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:23.375 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:23.375 00:10:23.375 real 0m6.281s 00:10:23.375 user 0m0.018s 00:10:23.375 sys 0m0.079s 00:10:23.375 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:23.375 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:23.375 ************************************ 00:10:23.375 END TEST filesystem_in_capsule_ext4 00:10:23.375 ************************************ 00:10:23.375 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:23.375 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:23.375 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:23.375 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:23.375 
************************************ 00:10:23.375 START TEST filesystem_in_capsule_btrfs 00:10:23.375 ************************************ 00:10:23.375 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:23.375 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:23.375 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:23.375 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:23.375 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:23.375 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:23.375 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:23.375 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:23.375 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:23.375 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:23.375 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:23.375 btrfs-progs v6.8.1 00:10:23.375 See https://btrfs.readthedocs.io for more information. 00:10:23.375 00:10:23.375 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:10:23.375 NOTE: several default settings have changed in version 5.15, please make sure 00:10:23.375 this does not affect your deployments: 00:10:23.375 - DUP for metadata (-m dup) 00:10:23.375 - enabled no-holes (-O no-holes) 00:10:23.375 - enabled free-space-tree (-R free-space-tree) 00:10:23.375 00:10:23.375 Label: (null) 00:10:23.375 UUID: feb4ce35-0ba4-49ba-9bbd-83708c9b977a 00:10:23.375 Node size: 16384 00:10:23.375 Sector size: 4096 (CPU page size: 4096) 00:10:23.375 Filesystem size: 510.00MiB 00:10:23.375 Block group profiles: 00:10:23.375 Data: single 8.00MiB 00:10:23.375 Metadata: DUP 32.00MiB 00:10:23.375 System: DUP 8.00MiB 00:10:23.375 SSD detected: yes 00:10:23.375 Zoned device: no 00:10:23.375 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:23.375 Checksum: crc32c 00:10:23.375 Number of devices: 1 00:10:23.375 Devices: 00:10:23.375 ID SIZE PATH 00:10:23.375 1 510.00MiB /dev/nvme0n1p1 00:10:23.375 00:10:23.375 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:23.375 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:23.942 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:23.942 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:23.942 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:23.942 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:23.942 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:23.942 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:23.942 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1424588 00:10:23.942 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:23.942 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:23.942 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:23.942 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:23.942 00:10:23.942 real 0m1.118s 00:10:23.942 user 0m0.027s 00:10:23.942 sys 0m0.116s 00:10:23.942 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:23.942 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:10:23.942 ************************************ 00:10:23.942 END TEST filesystem_in_capsule_btrfs 00:10:23.942 ************************************ 00:10:23.942 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:23.942 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:23.942 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:23.942 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:24.200 ************************************ 00:10:24.200 START TEST filesystem_in_capsule_xfs 00:10:24.200 ************************************ 00:10:24.200 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:24.200 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:24.200 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:24.200 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:24.200 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:24.200 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:24.200 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:24.200 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:10:24.200 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:24.200 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:24.200 10:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:24.200 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:24.200 = sectsz=512 attr=2, projid32bit=1 00:10:24.200 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:24.200 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:24.200 data = bsize=4096 blocks=130560, imaxpct=25 00:10:24.200 = sunit=0 swidth=0 blks 00:10:24.200 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:24.200 log =internal log bsize=4096 blocks=16384, version=2 00:10:24.200 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:24.200 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:25.133 Discarding blocks...Done. 
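The xtrace above is SPDK's make_filesystem() helper (common/autotest_common.sh) building XFS on the exported namespace. A minimal sketch of that helper, reconstructed from the traced statements (the force=-f branch, the fstype/dev_name/i/force locals, and the mkfs.xfs invocation all appear above; the ext4 -F spelling and the retry bound are assumptions):

    # Sketch of make_filesystem() as reconstructed from the xtrace above;
    # the retry loop bound and sleep are assumed, not taken from the trace.
    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local i=0
        local force
        if [ "$fstype" = ext4 ]; then
            force=-F    # assumed: ext4 spells its force flag -F
        else
            force=-f    # visible in the trace for xfs (and btrfs earlier)
        fi
        # Retry a few times in case the namespace is still settling.
        until mkfs."$fstype" $force "$dev_name"; do
            i=$((i + 1))
            [ "$i" -lt 3 ] || return 1
            sleep 1
        done
        return 0
    }

The mount/touch/sync/rm/umount sequence that follows in the trace then verifies the filesystem is actually writable before the partition is torn down.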
00:10:25.133 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:25.133 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:27.661 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:27.661 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:27.661 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:27.661 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:27.661 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:27.661 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:27.661 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1424588 00:10:27.661 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:27.661 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:27.661 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:27.661 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:27.661 00:10:27.661 real 0m3.374s 00:10:27.661 user 0m0.024s 00:10:27.661 sys 0m0.078s 00:10:27.661 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:27.661 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:27.661 ************************************ 00:10:27.661 END TEST filesystem_in_capsule_xfs 00:10:27.661 ************************************ 00:10:27.661 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:27.661 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:27.661 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:27.661 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.661 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:27.661 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:10:27.661 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:27.661 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:27.661 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:27.661 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:27.661 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:27.661 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:27.661 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.661 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.661 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.661 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:27.661 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1424588 00:10:27.661 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1424588 ']' 00:10:27.661 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1424588 00:10:27.661 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:27.661 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:27.661 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1424588 00:10:27.661 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:27.661 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:27.661 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1424588' 00:10:27.661 killing process with pid 1424588 00:10:27.661 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 1424588 00:10:27.661 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 1424588 00:10:28.227 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:28.228 00:10:28.228 real 0m17.173s 00:10:28.228 user 1m7.511s 00:10:28.228 sys 0m1.434s 00:10:28.228 10:24:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:28.228 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:28.228 ************************************ 00:10:28.228 END TEST nvmf_filesystem_in_capsule 00:10:28.228 ************************************ 00:10:28.228 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:28.228 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:28.228 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:28.228 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:28.228 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:28.228 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:28.228 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:28.228 rmmod nvme_tcp 00:10:28.228 rmmod nvme_fabrics 00:10:28.228 rmmod nvme_keyring 00:10:28.228 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:28.228 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:28.228 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:28.228 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:28.228 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:28.228 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:28.228 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:28.228 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:10:28.228 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:10:28.228 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:28.228 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:10:28.228 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:28.228 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:28.228 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.228 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:28.228 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.131 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:30.131 00:10:30.131 real 0m43.432s 00:10:30.131 user 2m18.657s 00:10:30.131 sys 0m7.498s 00:10:30.131 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:30.131 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:30.131 
************************************ 00:10:30.131 END TEST nvmf_filesystem 00:10:30.131 ************************************ 00:10:30.390 10:24:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:30.390 10:24:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:30.390 10:24:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:30.390 10:24:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:30.390 ************************************ 00:10:30.390 START TEST nvmf_target_discovery 00:10:30.390 ************************************ 00:10:30.390 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:30.390 * Looking for test storage... 00:10:30.390 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:30.390 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:30.390 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:10:30.390 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:30.390 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:30.390 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:30.390 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:30.390 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:30.390 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:10:30.390 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:10:30.390 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:10:30.390 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:10:30.390 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:10:30.390 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:10:30.390 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:10:30.390 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:30.390 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:10:30.390 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:10:30.390 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:30.390 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:30.390 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:10:30.390 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:10:30.390 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:30.390 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:10:30.390 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:10:30.390 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:10:30.390 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:10:30.390 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:30.390 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:10:30.390 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:10:30.390 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:30.390 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:30.390 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:10:30.390 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:30.390 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:30.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.390 --rc genhtml_branch_coverage=1 00:10:30.390 --rc genhtml_function_coverage=1 00:10:30.390 --rc genhtml_legend=1 00:10:30.390 --rc geninfo_all_blocks=1 00:10:30.390 --rc geninfo_unexecuted_blocks=1 00:10:30.390 00:10:30.390 ' 00:10:30.390 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:30.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.390 --rc genhtml_branch_coverage=1 00:10:30.390 --rc genhtml_function_coverage=1 00:10:30.390 --rc genhtml_legend=1 00:10:30.390 --rc geninfo_all_blocks=1 00:10:30.390 --rc geninfo_unexecuted_blocks=1 00:10:30.390 00:10:30.390 ' 00:10:30.390 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:30.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.390 --rc genhtml_branch_coverage=1 00:10:30.390 --rc genhtml_function_coverage=1 00:10:30.390 --rc genhtml_legend=1 00:10:30.390 --rc geninfo_all_blocks=1 00:10:30.390 --rc geninfo_unexecuted_blocks=1 00:10:30.390 00:10:30.390 ' 00:10:30.390 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:30.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.390 --rc genhtml_branch_coverage=1 00:10:30.390 --rc genhtml_function_coverage=1 00:10:30.390 --rc genhtml_legend=1 00:10:30.390 --rc geninfo_all_blocks=1 00:10:30.391 --rc geninfo_unexecuted_blocks=1 00:10:30.391 00:10:30.391 ' 00:10:30.391 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:30.391 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:30.391 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:30.391 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:30.391 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:30.391 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:30.391 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:30.391 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:30.391 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:30.391 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:30.391 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:30.391 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:30.391 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:30.649 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:30.649 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:30.649 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:30.649 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:30.649 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:30.649 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:30.649 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:10:30.649 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.649 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.649 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.649 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.649 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.649 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.649 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:30.650 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.650 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:10:30.650 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:30.650 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:30.650 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:30.650 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:30.650 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:30.650 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:30.650 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:30.650 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:30.650 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:30.650 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:30.650 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:10:30.650 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:30.650 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:30.650 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:30.650 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:30.650 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:30.650 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:30.650 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:30.650 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:30.650 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:30.650 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.650 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:30.650 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.650 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:30.650 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:30.650 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:10:30.650 10:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:10:37.219 10:24:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:37.219 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:37.219 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:37.219 Found net devices under 0000:af:00.0: cvl_0_0 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:37.219 Found net devices under 0000:af:00.1: cvl_0_1 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:37.219 10:24:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:37.219 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:37.220 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:37.220 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.455 ms 00:10:37.220 00:10:37.220 --- 10.0.0.2 ping statistics --- 00:10:37.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.220 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:37.220 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:37.220 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:10:37.220 00:10:37.220 --- 10.0.0.1 ping statistics --- 00:10:37.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.220 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=1431462 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 1431462 00:10:37.220 10:24:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 1431462 ']' 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.220 [2024-12-12 10:24:10.417790] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:10:37.220 [2024-12-12 10:24:10.417838] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.220 [2024-12-12 10:24:10.499694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:37.220 [2024-12-12 10:24:10.543528] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:37.220 [2024-12-12 10:24:10.543566] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:37.220 [2024-12-12 10:24:10.543577] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:37.220 [2024-12-12 10:24:10.543583] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:37.220 [2024-12-12 10:24:10.543589] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
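The nvmfappstart step above backgrounds nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then blocks in waitforlisten until the app answers on /var/tmp/spdk.sock (rpc_addr and max_retries=100 are visible in the trace). A minimal sketch of that launch-and-wait pattern; the rpc_get_methods probe is an illustrative readiness check, not necessarily SPDK's exact one:

    # Launch the target in the test namespace and wait for its RPC socket,
    # mirroring the nvmfappstart/waitforlisten trace above.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll up to max_retries=100 times, as in the trace; the probe below
    # (rpc_get_methods over /var/tmp/spdk.sock) is an assumed stand-in.
    for _ in $(seq 1 100); do
        kill -0 "$nvmfpid" || exit 1    # bail out if the target died early
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done

Once the socket answers, the test proceeds to create the TCP transport and the four null-bdev subsystems via rpc_cmd, as the following trace lines show.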
00:10:37.220 [2024-12-12 10:24:10.545042] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:37.220 [2024-12-12 10:24:10.545152] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:37.220 [2024-12-12 10:24:10.545256] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.220 [2024-12-12 10:24:10.545257] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.220 [2024-12-12 10:24:10.695231] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.220 Null1 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.220 10:24:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.220 [2024-12-12 10:24:10.751742] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.220 Null2 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:10:37.220 Null3 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.220 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:37.221 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.221 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.221 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.221 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:37.221 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.221 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.221 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.221 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:37.221 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:37.221 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.221 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.221 Null4 00:10:37.221 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.221 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:37.221 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.221 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.221 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.221 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:37.221 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.221 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.221 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.221 10:24:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:37.221 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.221 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.221 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.221 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:37.221 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.221 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.221 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.221 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:37.221 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.221 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.221 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.221 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:10:37.221 00:10:37.221 Discovery Log Number of Records 6, Generation counter 6 00:10:37.221 =====Discovery Log Entry 0====== 00:10:37.221 trtype: tcp 00:10:37.221 adrfam: ipv4 00:10:37.221 subtype: current discovery subsystem 00:10:37.221 treq: not required 00:10:37.221 portid: 0 00:10:37.221 trsvcid: 4420 00:10:37.221 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:37.221 traddr: 10.0.0.2 00:10:37.221 eflags: explicit discovery connections, duplicate discovery information 00:10:37.221 sectype: none 00:10:37.221 =====Discovery Log Entry 1====== 00:10:37.221 trtype: tcp 00:10:37.221 adrfam: ipv4 00:10:37.221 subtype: nvme subsystem 00:10:37.221 treq: not required 00:10:37.221 portid: 0 00:10:37.221 trsvcid: 4420 00:10:37.221 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:37.221 traddr: 10.0.0.2 00:10:37.221 eflags: none 00:10:37.221 sectype: none 00:10:37.221 =====Discovery Log Entry 2====== 00:10:37.221 trtype: tcp 00:10:37.221 adrfam: ipv4 00:10:37.221 subtype: nvme subsystem 00:10:37.221 treq: not required 00:10:37.221 portid: 0 00:10:37.221 trsvcid: 4420 00:10:37.221 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:37.221 traddr: 10.0.0.2 00:10:37.221 eflags: none 00:10:37.221 sectype: none 00:10:37.221 =====Discovery Log Entry 3====== 00:10:37.221 trtype: tcp 00:10:37.221 adrfam: ipv4 00:10:37.221 subtype: nvme subsystem 00:10:37.221 treq: not required 00:10:37.221 portid: 0 00:10:37.221 trsvcid: 4420 00:10:37.221 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:37.221 traddr: 10.0.0.2 00:10:37.221 eflags: none 00:10:37.221 sectype: none 00:10:37.221 =====Discovery Log Entry 4====== 00:10:37.221 trtype: tcp 00:10:37.221 adrfam: ipv4 00:10:37.221 subtype: nvme subsystem 
00:10:37.221 treq: not required 00:10:37.221 portid: 0 00:10:37.221 trsvcid: 4420 00:10:37.221 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:37.221 traddr: 10.0.0.2 00:10:37.221 eflags: none 00:10:37.221 sectype: none 00:10:37.221 =====Discovery Log Entry 5====== 00:10:37.221 trtype: tcp 00:10:37.221 adrfam: ipv4 00:10:37.221 subtype: discovery subsystem referral 00:10:37.221 treq: not required 00:10:37.221 portid: 0 00:10:37.221 trsvcid: 4430 00:10:37.221 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:37.221 traddr: 10.0.0.2 00:10:37.221 eflags: none 00:10:37.221 sectype: none 00:10:37.221 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:37.221 Perform nvmf subsystem discovery via RPC 00:10:37.221 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:37.221 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.221 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.221 [ 00:10:37.221 { 00:10:37.221 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:37.221 "subtype": "Discovery", 00:10:37.221 "listen_addresses": [ 00:10:37.221 { 00:10:37.221 "trtype": "TCP", 00:10:37.221 "adrfam": "IPv4", 00:10:37.221 "traddr": "10.0.0.2", 00:10:37.221 "trsvcid": "4420" 00:10:37.221 } 00:10:37.221 ], 00:10:37.221 "allow_any_host": true, 00:10:37.221 "hosts": [] 00:10:37.221 }, 00:10:37.221 { 00:10:37.221 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:37.221 "subtype": "NVMe", 00:10:37.221 "listen_addresses": [ 00:10:37.221 { 00:10:37.221 "trtype": "TCP", 00:10:37.221 "adrfam": "IPv4", 00:10:37.221 "traddr": "10.0.0.2", 00:10:37.221 "trsvcid": "4420" 00:10:37.221 } 00:10:37.221 ], 00:10:37.221 "allow_any_host": true, 00:10:37.221 "hosts": [], 00:10:37.221 "serial_number": "SPDK00000000000001", 00:10:37.221 "model_number": "SPDK bdev Controller", 00:10:37.221 "max_namespaces": 32, 00:10:37.221 "min_cntlid": 1, 00:10:37.221 "max_cntlid": 65519, 00:10:37.221 "namespaces": [ 00:10:37.221 { 00:10:37.221 "nsid": 1, 00:10:37.221 "bdev_name": "Null1", 00:10:37.221 "name": "Null1", 00:10:37.221 "nguid": "C164B13D2714431AA61693B6C540D69A", 00:10:37.221 "uuid": "c164b13d-2714-431a-a616-93b6c540d69a" 00:10:37.221 } 00:10:37.221 ] 00:10:37.221 }, 00:10:37.221 { 00:10:37.221 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:37.221 "subtype": "NVMe", 00:10:37.221 "listen_addresses": [ 00:10:37.221 { 00:10:37.221 "trtype": "TCP", 00:10:37.221 "adrfam": "IPv4", 00:10:37.221 "traddr": "10.0.0.2", 00:10:37.221 "trsvcid": "4420" 00:10:37.221 } 00:10:37.221 ], 00:10:37.221 "allow_any_host": true, 00:10:37.221 "hosts": [], 00:10:37.221 "serial_number": "SPDK00000000000002", 00:10:37.221 "model_number": "SPDK bdev Controller", 00:10:37.221 "max_namespaces": 32, 00:10:37.221 "min_cntlid": 1, 00:10:37.221 "max_cntlid": 65519, 00:10:37.221 "namespaces": [ 00:10:37.221 { 00:10:37.221 "nsid": 1, 00:10:37.221 "bdev_name": "Null2", 00:10:37.221 "name": "Null2", 00:10:37.221 "nguid": "9938234967C14606B65B6DD2B10B320E", 00:10:37.221 "uuid": "99382349-67c1-4606-b65b-6dd2b10b320e" 00:10:37.221 } 00:10:37.221 ] 00:10:37.221 }, 00:10:37.221 { 00:10:37.221 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:37.221 "subtype": "NVMe", 00:10:37.221 "listen_addresses": [ 00:10:37.221 { 00:10:37.221 "trtype": "TCP", 00:10:37.221 "adrfam": "IPv4", 00:10:37.221 "traddr": "10.0.0.2", 
00:10:37.221 "trsvcid": "4420" 00:10:37.221 } 00:10:37.221 ], 00:10:37.221 "allow_any_host": true, 00:10:37.221 "hosts": [], 00:10:37.221 "serial_number": "SPDK00000000000003", 00:10:37.221 "model_number": "SPDK bdev Controller", 00:10:37.221 "max_namespaces": 32, 00:10:37.221 "min_cntlid": 1, 00:10:37.221 "max_cntlid": 65519, 00:10:37.221 "namespaces": [ 00:10:37.221 { 00:10:37.221 "nsid": 1, 00:10:37.221 "bdev_name": "Null3", 00:10:37.221 "name": "Null3", 00:10:37.221 "nguid": "FD99BC89AB884AE98C6EBD0008FC4875", 00:10:37.221 "uuid": "fd99bc89-ab88-4ae9-8c6e-bd0008fc4875" 00:10:37.221 } 00:10:37.221 ] 00:10:37.221 }, 00:10:37.221 { 00:10:37.221 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:37.221 "subtype": "NVMe", 00:10:37.221 "listen_addresses": [ 00:10:37.221 { 00:10:37.221 "trtype": "TCP", 00:10:37.221 "adrfam": "IPv4", 00:10:37.221 "traddr": "10.0.0.2", 00:10:37.221 "trsvcid": "4420" 00:10:37.222 } 00:10:37.222 ], 00:10:37.222 "allow_any_host": true, 00:10:37.222 "hosts": [], 00:10:37.222 "serial_number": "SPDK00000000000004", 00:10:37.222 "model_number": "SPDK bdev Controller", 00:10:37.222 "max_namespaces": 32, 00:10:37.222 "min_cntlid": 1, 00:10:37.222 "max_cntlid": 65519, 00:10:37.222 "namespaces": [ 00:10:37.222 { 00:10:37.222 "nsid": 1, 00:10:37.222 "bdev_name": "Null4", 00:10:37.222 "name": "Null4", 00:10:37.222 "nguid": "E883A66C6C654F27A148785DBE6EB605", 00:10:37.222 "uuid": "e883a66c-6c65-4f27-a148-785dbe6eb605" 00:10:37.222 } 00:10:37.222 ] 00:10:37.222 } 00:10:37.222 ] 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.222 10:24:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.222 10:24:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:37.222 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:37.222 rmmod nvme_tcp 00:10:37.222 rmmod nvme_fabrics 00:10:37.222 rmmod nvme_keyring 00:10:37.517 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:37.517 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:10:37.517 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:10:37.517 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 1431462 ']' 00:10:37.517 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 1431462 00:10:37.517 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 1431462 ']' 00:10:37.517 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 1431462 00:10:37.517 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:10:37.517 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:37.517 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1431462 00:10:37.517 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:37.517 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:37.517 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1431462' 00:10:37.517 killing process with pid 1431462 00:10:37.517 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 1431462 00:10:37.517 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 1431462 00:10:37.517 10:24:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:37.517 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:37.517 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:37.517 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:10:37.517 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:10:37.517 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:37.517 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:10:37.517 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:37.517 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:37.517 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.517 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:37.517 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.105 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:40.105 00:10:40.105 real 0m9.315s 00:10:40.105 user 0m5.532s 00:10:40.105 sys 0m4.803s 00:10:40.105 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:40.105 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.105 ************************************ 00:10:40.105 END TEST nvmf_target_discovery 00:10:40.105 ************************************ 00:10:40.105 10:24:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:40.105 10:24:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:40.105 10:24:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:40.105 10:24:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:40.105 ************************************ 00:10:40.105 START TEST nvmf_referrals 00:10:40.105 ************************************ 00:10:40.105 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:40.105 * Looking for test storage... 
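Note: the nvmf_target_discovery run that just ended boils down to a short RPC sequence. A minimal standalone reproduction, assuming a running nvmf_tgt reachable through the stock scripts/rpc.py (the RPC names, listener address 10.0.0.2 and ports are taken from this log; the loop itself is a sketch, not the exact discovery.sh code):

    # build four null-bdev-backed subsystems, each listening on TCP 4420
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    for i in 1 2 3 4; do
        scripts/rpc.py bdev_null_create Null$i 102400 512
        scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
    # nvme discover against the data port now reports 6 records: the current
    # discovery subsystem, the 4 NVMe subsystems, and the referral on 4430
    nvme discover -t tcp -a 10.0.0.2 -s 4420
    # teardown mirrors setup
    for i in 1 2 3 4; do
        scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
        scripts/rpc.py bdev_null_delete Null$i
    done
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
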
00:10:40.105 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:40.105 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:40.105 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:10:40.105 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:40.105 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:40.105 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:40.105 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:40.105 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:40.105 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:10:40.105 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:10:40.105 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:10:40.105 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:10:40.105 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:10:40.105 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:10:40.105 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:10:40.105 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:40.105 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:10:40.105 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:10:40.105 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:40.105 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:40.105 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:10:40.105 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:10:40.105 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:40.105 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:10:40.105 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:10:40.105 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:10:40.105 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:10:40.105 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:40.105 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:10:40.105 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:10:40.105 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:40.105 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:40.105 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:10:40.105 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:40.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.106 --rc genhtml_branch_coverage=1 00:10:40.106 --rc genhtml_function_coverage=1 00:10:40.106 --rc genhtml_legend=1 00:10:40.106 --rc geninfo_all_blocks=1 00:10:40.106 --rc geninfo_unexecuted_blocks=1 00:10:40.106 00:10:40.106 ' 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:40.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.106 --rc genhtml_branch_coverage=1 00:10:40.106 --rc genhtml_function_coverage=1 00:10:40.106 --rc genhtml_legend=1 00:10:40.106 --rc geninfo_all_blocks=1 00:10:40.106 --rc geninfo_unexecuted_blocks=1 00:10:40.106 00:10:40.106 ' 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:40.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.106 --rc genhtml_branch_coverage=1 00:10:40.106 --rc genhtml_function_coverage=1 00:10:40.106 --rc genhtml_legend=1 00:10:40.106 --rc geninfo_all_blocks=1 00:10:40.106 --rc geninfo_unexecuted_blocks=1 00:10:40.106 00:10:40.106 ' 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:40.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.106 --rc genhtml_branch_coverage=1 00:10:40.106 --rc genhtml_function_coverage=1 00:10:40.106 --rc genhtml_legend=1 00:10:40.106 --rc geninfo_all_blocks=1 00:10:40.106 --rc geninfo_unexecuted_blocks=1 00:10:40.106 00:10:40.106 ' 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:40.106 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
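Note: the --hostnqn/--hostid pair used by every nvme discover call in these tests is generated once when nvmf/common.sh is sourced, as traced just above. Reproducing it by hand takes two lines of nvme-cli; the parameter expansion that strips the prefix is an illustrative shorthand, not the exact common.sh code:

    # generate a host NQN; its UUID suffix doubles as the host ID
    hostnqn=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    hostid=${hostnqn##*uuid:}       # keep only the bare UUID
    nvme discover --hostnqn=$hostnqn --hostid=$hostid -t tcp -a 10.0.0.2 -s 8009
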
00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:10:40.106 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:10:46.670 10:24:19 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:46.670 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:46.670 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:46.670 
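Note: the scan above walks the PCI bus for supported NVMe-oF NICs and, on this rig, finds the two ports of an Intel E810 (8086:159b, ice driver) plus their net devices via sysfs. A simplified equivalent of that enumeration (a sketch using lspci, not the exact gather_supported_nvmf_pci_devs logic in common.sh):

    # list the net devices behind each Intel E810 port (vendor 0x8086, device 0x159b)
    for pci in $(lspci -Dnd 8086:159b | awk '{print $1}'); do
        for dev in /sys/bus/pci/devices/$pci/net/*; do
            echo "Found net devices under $pci: ${dev##*/}"
        done
    done
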
10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:46.670 Found net devices under 0000:af:00.0: cvl_0_0 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:46.670 Found net devices under 0000:af:00.1: cvl_0_1 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:46.670 10:24:19 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:46.670 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:46.671 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:46.671 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:46.671 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:46.671 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:46.671 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:46.671 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:10:46.671 00:10:46.671 --- 10.0.0.2 ping statistics --- 00:10:46.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.671 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:10:46.671 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:46.671 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:46.671 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:10:46.671 00:10:46.671 --- 10.0.0.1 ping statistics --- 00:10:46.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.671 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:10:46.671 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:46.671 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:10:46.671 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:46.671 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:46.671 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:46.671 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:46.671 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:46.671 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:46.671 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:46.671 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:46.671 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:46.671 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:46.671 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:46.671 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=1435276 00:10:46.671 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:46.671 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 1435276 00:10:46.671 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 1435276 ']' 00:10:46.671 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.671 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:46.671 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
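Note: nvmfappstart has just launched nvmf_tgt inside the target namespace (nvmfpid 1435276 above) and waitforlisten now blocks until the app's RPC socket answers. Roughly, under the hood (a sketch; the real polling loop in common.sh is more involved, though rpc_get_methods is a genuine SPDK RPC):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the UNIX-domain RPC socket until the target is ready
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
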
00:10:46.671 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:46.671 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:46.671 [2024-12-12 10:24:19.879591] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:10:46.671 [2024-12-12 10:24:19.879637] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:46.671 [2024-12-12 10:24:19.956332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:46.671 [2024-12-12 10:24:19.996083] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:46.671 [2024-12-12 10:24:19.996125] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:46.671 [2024-12-12 10:24:19.996132] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:46.671 [2024-12-12 10:24:19.996138] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:46.671 [2024-12-12 10:24:19.996143] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:46.671 [2024-12-12 10:24:19.997613] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:46.671 [2024-12-12 10:24:19.997707] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:46.671 [2024-12-12 10:24:19.997819] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.671 [2024-12-12 10:24:19.997820] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:46.671 [2024-12-12 10:24:20.144233] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:10:46.671 [2024-12-12 10:24:20.172739] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.671 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:46.672 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.672 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:46.672 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.672 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:46.672 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.672 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:46.672 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.672 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:46.672 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:46.672 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.672 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:46.672 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.672 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:46.672 10:24:20 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:46.672 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:46.672 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:46.672 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:46.672 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:46.672 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:46.930 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:46.930 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:46.930 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:10:46.930 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.930 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:46.930 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.930 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:46.930 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.930 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:46.930 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.930 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:46.930 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:46.930 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:46.930 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:46.930 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.930 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:46.930 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:46.930 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.930 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:46.930 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:46.930 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:10:46.930 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:10:46.930 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:46.930 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:46.930 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:46.930 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:47.187 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:47.187 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:47.187 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:47.187 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:47.187 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:47.187 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:47.187 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:47.187 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:47.187 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:47.187 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:47.187 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:47.187 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:47.187 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:47.445 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:47.445 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:47.445 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.445 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:47.445 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.445 10:24:21 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:47.445 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:47.445 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:47.445 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:47.445 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.445 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:47.445 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:47.445 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.445 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:47.445 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:47.445 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:47.445 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:47.445 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:47.445 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:47.445 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:47.445 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:47.702 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:47.702 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:47.702 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:47.702 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:47.702 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:47.702 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:47.702 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:47.959 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:47.959 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:10:47.959 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:47.959 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:10:47.959 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:47.959 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:47.959 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:47.959 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:47.959 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.959 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:47.959 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.959 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:47.959 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:47.959 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.959 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:47.959 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.217 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:48.217 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:48.217 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:48.217 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:48.217 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:48.217 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:48.217 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:48.217 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:48.217 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:48.217 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:48.217 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:48.217 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:48.217 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:10:48.217 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
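
The trace above is the heart of the referrals test: three referrals are added over RPC, counted, read back both through nvmf_discovery_get_referrals and through the host-visible discovery log page, removed again, and then the cycle is repeated with explicit subsystem NQNs (-n discovery and -n nqn.2016-06.io.spdk:cnode1). A minimal standalone sketch of the basic round-trip, assuming a target already listening for discovery on 10.0.0.2:8009, SPDK's rpc.py on PATH with the default /var/tmp/spdk.sock socket, and nvme-cli plus jq installed (this script is an illustration, not part of the harness):

    # Add three referrals, confirm the count over RPC, then confirm the
    # host-visible discovery log page agrees before removing them again.
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    (( $(rpc.py nvmf_discovery_get_referrals | jq length) == 3 ))
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
        | sort
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        rpc.py nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
    done
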
00:10:48.217 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:10:48.217 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:48.217 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:48.217 rmmod nvme_tcp 00:10:48.217 rmmod nvme_fabrics 00:10:48.476 rmmod nvme_keyring 00:10:48.476 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:48.476 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:10:48.476 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:10:48.476 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 1435276 ']' 00:10:48.476 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 1435276 00:10:48.476 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 1435276 ']' 00:10:48.476 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 1435276 00:10:48.476 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:10:48.476 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:48.476 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1435276 00:10:48.476 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:48.476 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:48.476 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1435276' 00:10:48.476 killing process with pid 1435276 00:10:48.476 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 1435276 00:10:48.476 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 1435276 00:10:48.476 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:48.476 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:48.476 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:48.476 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:10:48.476 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:48.476 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:10:48.476 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:10:48.476 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:48.476 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:48.476 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:48.735 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:48.735 10:24:22 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.638 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:50.638 00:10:50.638 real 0m10.960s 00:10:50.638 user 0m12.542s 00:10:50.638 sys 0m5.115s 00:10:50.638 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:50.638 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.638 ************************************ 00:10:50.638 END TEST nvmf_referrals 00:10:50.638 ************************************ 00:10:50.638 10:24:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:50.638 10:24:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:50.638 10:24:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:50.638 10:24:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:50.638 ************************************ 00:10:50.638 START TEST nvmf_connect_disconnect 00:10:50.638 ************************************ 00:10:50.638 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:50.898 * Looking for test storage... 00:10:50.898 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:50.898 10:24:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:50.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.898 --rc genhtml_branch_coverage=1 00:10:50.898 --rc genhtml_function_coverage=1 00:10:50.898 --rc genhtml_legend=1 00:10:50.898 --rc geninfo_all_blocks=1 00:10:50.898 --rc geninfo_unexecuted_blocks=1 00:10:50.898 00:10:50.898 ' 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:50.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.898 --rc genhtml_branch_coverage=1 00:10:50.898 --rc genhtml_function_coverage=1 00:10:50.898 --rc genhtml_legend=1 00:10:50.898 --rc geninfo_all_blocks=1 00:10:50.898 --rc geninfo_unexecuted_blocks=1 00:10:50.898 00:10:50.898 ' 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:50.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.898 --rc genhtml_branch_coverage=1 00:10:50.898 --rc genhtml_function_coverage=1 00:10:50.898 --rc genhtml_legend=1 00:10:50.898 --rc geninfo_all_blocks=1 00:10:50.898 --rc geninfo_unexecuted_blocks=1 00:10:50.898 00:10:50.898 ' 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:50.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.898 --rc genhtml_branch_coverage=1 00:10:50.898 --rc genhtml_function_coverage=1 00:10:50.898 --rc genhtml_legend=1 00:10:50.898 --rc geninfo_all_blocks=1 00:10:50.898 --rc geninfo_unexecuted_blocks=1 00:10:50.898 00:10:50.898 ' 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:50.898 10:24:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:50.898 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:10:50.898 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:57.476 
10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:57.476 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:57.476 
10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:57.476 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:57.476 Found net devices under 0000:af:00.0: cvl_0_0 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
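
What the long nvmf/common.sh trace above is doing: gather_supported_nvmf_pci_devs builds allowlists of Intel E810/X722 and Mellanox device IDs, intersects them with the PCI bus, and resolves each hit to its kernel net interface by globbing sysfs. A sketch of that last resolution step, using the two device addresses found in this run (the addresses are specific to this machine):

    # Map each candidate NIC's PCI address to the netdev the kernel created
    # for it, mirroring the pci_net_devs glob in the trace above.
    for pci in 0000:af:00.0 0000:af:00.1; do
        for path in /sys/bus/pci/devices/"$pci"/net/*; do
            [[ -e $path ]] && echo "Found net device under $pci: ${path##*/}"
        done
    done
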
00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:57.476 Found net devices under 0000:af:00.1: cvl_0_1 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:57.476 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:57.477 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:57.477 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:57.477 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:57.477 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:57.477 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.396 ms 00:10:57.477 00:10:57.477 --- 10.0.0.2 ping statistics --- 00:10:57.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.477 rtt min/avg/max/mdev = 0.396/0.396/0.396/0.000 ms 00:10:57.477 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:57.477 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:57.477 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:10:57.477 00:10:57.477 --- 10.0.0.1 ping statistics --- 00:10:57.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.477 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:10:57.477 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:57.477 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:10:57.477 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:57.477 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:57.477 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:57.477 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:57.477 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:57.477 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:57.477 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:57.477 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:10:57.477 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:57.477 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:57.477 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:57.477 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=1439283 00:10:57.477 10:24:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:57.477 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 1439283 00:10:57.477 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 1439283 ']' 00:10:57.477 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.477 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:57.477 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.477 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:57.477 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:57.477 [2024-12-12 10:24:30.959530] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:10:57.477 [2024-12-12 10:24:30.959584] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:57.477 [2024-12-12 10:24:31.037877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:57.477 [2024-12-12 10:24:31.080344] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:57.477 [2024-12-12 10:24:31.080382] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:57.477 [2024-12-12 10:24:31.080389] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:57.477 [2024-12-12 10:24:31.080395] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:57.477 [2024-12-12 10:24:31.080400] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
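
With the namespace plumbed and nvme-tcp loaded, nvmfappstart launches the target inside cvl_0_0_ns_spdk and waitforlisten blocks until its RPC socket answers. A sketch of that start-and-wait pattern, assuming an SPDK build tree and the default /var/tmp/spdk.sock RPC socket; probing liveness via spdk_get_version is this sketch's choice, not necessarily what waitforlisten itself does:

    # Start the target in the test namespace, then poll the RPC socket until
    # it responds; give up early if the process has already died.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until rpc.py spdk_get_version >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "target died" >&2; exit 1; }
        sleep 0.5
    done
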
00:10:57.477 [2024-12-12 10:24:31.081879] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:57.477 [2024-12-12 10:24:31.081986] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:57.477 [2024-12-12 10:24:31.082094] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.477 [2024-12-12 10:24:31.082095] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:57.477 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:57.477 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:10:57.477 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:57.477 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:57.477 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:57.477 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:57.477 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:57.477 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.477 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:57.477 [2024-12-12 10:24:31.219936] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:57.477 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.477 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:10:57.477 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.477 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:57.477 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.477 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:10:57.477 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:57.477 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.477 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:57.477 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.477 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:57.477 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.477 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:57.477 10:24:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.477 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:57.477 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.477 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:57.477 [2024-12-12 10:24:31.282064] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:57.477 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.477 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:10:57.477 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:10:57.477 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:00.755 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.032 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.310 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.590 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.871 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.871 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:13.871 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:13.871 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:13.871 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:11:13.871 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:13.871 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:11:13.871 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:13.871 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:13.871 rmmod nvme_tcp 00:11:13.871 rmmod nvme_fabrics 00:11:13.871 rmmod nvme_keyring 00:11:13.871 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:13.871 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:11:13.871 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:11:13.871 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 1439283 ']' 00:11:13.871 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 1439283 00:11:13.871 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1439283 ']' 00:11:13.871 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 1439283 00:11:13.871 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
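The five "disconnected 1 controller(s)" lines above come from the loop entered at num_iterations=5: the target stack is assembled once over RPC, then each pass connects the kernel initiator and immediately disconnects it; the teardown trace continues below. The RPC calls here are taken verbatim from the trace, while the nvme-cli pair is an assumption about what each iteration runs, since the log records only the disconnect results:

    # Target side, once: transport, backing bdev, subsystem, namespace, listener
    # (rpc.py stands for scripts/rpc.py talking to the target's /var/tmp/spdk.sock).
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc.py bdev_malloc_create 64 512                       # log: bdev=Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Initiator side, per iteration (hypothetical reconstruction):
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1          # prints the NQN:... disconnected line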
00:11:13.871 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:13.872 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1439283 00:11:13.872 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:13.872 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:13.872 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1439283' 00:11:13.872 killing process with pid 1439283 00:11:13.872 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 1439283 00:11:13.872 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 1439283 00:11:13.872 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:13.872 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:13.872 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:13.872 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:11:13.872 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:11:13.872 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:13.872 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:11:13.872 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:13.872 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:13.872 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:13.872 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:13.872 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.405 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:16.405 00:11:16.405 real 0m25.254s 00:11:16.405 user 1m8.193s 00:11:16.405 sys 0m5.814s 00:11:16.405 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.405 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:16.405 ************************************ 00:11:16.405 END TEST nvmf_connect_disconnect 00:11:16.405 ************************************ 00:11:16.405 10:24:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:16.405 10:24:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:16.405 10:24:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.405 10:24:49 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:16.405 ************************************ 00:11:16.405 START TEST nvmf_multitarget 00:11:16.405 ************************************ 00:11:16.405 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:16.405 * Looking for test storage... 00:11:16.405 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:16.405 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:16.405 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:11:16.405 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:16.405 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:16.405 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:16.405 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:16.405 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:16.405 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:16.405 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:16.405 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:16.405 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:16.405 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:11:16.405 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:16.405 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:16.405 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:16.405 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:16.405 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:11:16.405 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:16.405 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:16.405 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:16.405 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:16.405 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:16.405 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:16.405 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:16.405 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:16.405 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:16.405 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:16.405 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:16.405 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:16.405 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:16.405 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:16.405 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:16.405 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:16.405 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:16.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.405 --rc genhtml_branch_coverage=1 00:11:16.405 --rc genhtml_function_coverage=1 00:11:16.405 --rc genhtml_legend=1 00:11:16.405 --rc geninfo_all_blocks=1 00:11:16.405 --rc geninfo_unexecuted_blocks=1 00:11:16.405 00:11:16.405 ' 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:16.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.406 --rc genhtml_branch_coverage=1 00:11:16.406 --rc genhtml_function_coverage=1 00:11:16.406 --rc genhtml_legend=1 00:11:16.406 --rc geninfo_all_blocks=1 00:11:16.406 --rc geninfo_unexecuted_blocks=1 00:11:16.406 00:11:16.406 ' 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:16.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.406 --rc genhtml_branch_coverage=1 00:11:16.406 --rc genhtml_function_coverage=1 00:11:16.406 --rc genhtml_legend=1 00:11:16.406 --rc geninfo_all_blocks=1 00:11:16.406 --rc geninfo_unexecuted_blocks=1 00:11:16.406 00:11:16.406 ' 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:16.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.406 --rc genhtml_branch_coverage=1 00:11:16.406 --rc genhtml_function_coverage=1 00:11:16.406 --rc genhtml_legend=1 00:11:16.406 --rc geninfo_all_blocks=1 00:11:16.406 --rc geninfo_unexecuted_blocks=1 00:11:16.406 00:11:16.406 ' 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:16.406 10:24:50 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:16.406 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:16.406 10:24:50 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:11:16.406 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:22.973 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:22.973 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:11:22.973 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:22.973 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:22.973 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:22.973 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:22.973 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:22.973 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:11:22.973 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:22.973 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:11:22.973 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:11:22.973 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:11:22.973 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:11:22.973 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:11:22.973 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:11:22.973 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:22.973 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:22.973 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
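The walk underway here is gather_supported_nvmf_pci_devs from nvmf/common.sh: it builds allowlists of NIC device IDs (e810, x722, mlx), settles on the two e810 functions present on this rig, and, in the records that follow, resolves each PCI function to the net device the kernel bound to it. The resolution step, condensed from the trace (error handling beyond the existence check omitted):

    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../0000:af:00.0/net/cvl_0_0
        [[ -e ${pci_net_devs[0]} ]] || continue            # skip functions with no bound netdev
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done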
00:11:22.973 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:22.973 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:22.973 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:22.973 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:22.973 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:22.973 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:22.973 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:22.973 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:22.973 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:22.973 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:22.973 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:22.973 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:22.973 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:22.973 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:22.973 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:22.973 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:22.973 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:22.973 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:22.973 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:22.973 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:22.973 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:22.973 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:22.974 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:22.974 Found net devices under 0000:af:00.0: cvl_0_0 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:22.974 Found net devices under 0000:af:00.1: cvl_0_1 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:22.974 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:22.974 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:22.974 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:22.974 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:22.974 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:22.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:22.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:11:22.974 00:11:22.974 --- 10.0.0.2 ping statistics --- 00:11:22.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.974 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:11:22.974 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:22.974 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:22.974 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:11:22.974 00:11:22.974 --- 10.0.0.1 ping statistics --- 00:11:22.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.974 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:11:22.974 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:22.974 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:11:22.974 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:22.974 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:22.974 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:22.974 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:22.974 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:22.974 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:22.974 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:22.974 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:22.974 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:22.974 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:22.974 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:22.974 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=1445542 00:11:22.974 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 1445542 00:11:22.974 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:22.974 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 1445542 ']' 00:11:22.974 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.974 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:22.974 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.974 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:22.974 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:22.974 [2024-12-12 10:24:56.165105] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:11:22.974 [2024-12-12 10:24:56.165147] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:22.974 [2024-12-12 10:24:56.244871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:22.974 [2024-12-12 10:24:56.284469] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:22.974 [2024-12-12 10:24:56.284509] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:22.974 [2024-12-12 10:24:56.284515] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:22.974 [2024-12-12 10:24:56.284522] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:22.974 [2024-12-12 10:24:56.284526] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:22.974 [2024-12-12 10:24:56.285976] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:22.974 [2024-12-12 10:24:56.286086] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:22.974 [2024-12-12 10:24:56.286170] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.974 [2024-12-12 10:24:56.286171] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:23.233 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:23.233 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:11:23.233 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:23.233 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:23.233 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:23.233 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:23.233 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:23.233 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:23.233 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:23.233 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:23.233 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:23.233 "nvmf_tgt_1" 00:11:23.491 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:23.491 "nvmf_tgt_2" 00:11:23.491 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
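At this point the multitarget flow is visible end to end: multitarget_rpc.py asserted that exactly one (default) target exists, created nvmf_tgt_1, and in the records that follow creates nvmf_tgt_2, checks that the count reached 3, then deletes both and confirms the count falls back to 1. Condensed from the trace, with the repository path shortened:

    rpc=test/nvmf/target/multitarget_rpc.py                # full path in the log
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]       # only the default target so far
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32            # prints "nvmf_tgt_1"
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32            # prints "nvmf_tgt_2"
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]
    $rpc nvmf_delete_target -n nvmf_tgt_1                  # each delete prints true
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]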
00:11:23.491 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:23.491 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:23.491 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:23.749 true 00:11:23.749 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:23.749 true 00:11:23.749 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:23.749 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:24.008 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:24.008 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:24.008 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:24.008 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:24.008 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:11:24.008 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:24.008 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:11:24.008 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:24.008 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:24.008 rmmod nvme_tcp 00:11:24.008 rmmod nvme_fabrics 00:11:24.008 rmmod nvme_keyring 00:11:24.008 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:24.008 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:11:24.008 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:11:24.008 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 1445542 ']' 00:11:24.008 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 1445542 00:11:24.008 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 1445542 ']' 00:11:24.008 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 1445542 00:11:24.008 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:11:24.008 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:24.008 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1445542 00:11:24.008 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:24.008 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:24.008 10:24:57 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1445542' 00:11:24.008 killing process with pid 1445542 00:11:24.008 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 1445542 00:11:24.008 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 1445542 00:11:24.267 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:24.267 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:24.267 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:24.267 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:11:24.267 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:11:24.267 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:24.267 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:11:24.267 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:24.267 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:24.267 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.267 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:24.267 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.172 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:26.172 00:11:26.172 real 0m10.177s 00:11:26.172 user 0m9.787s 00:11:26.172 sys 0m4.917s 00:11:26.172 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:26.172 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:26.172 ************************************ 00:11:26.172 END TEST nvmf_multitarget 00:11:26.172 ************************************ 00:11:26.172 10:25:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:26.172 10:25:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:26.172 10:25:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:26.172 10:25:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:26.431 ************************************ 00:11:26.431 START TEST nvmf_rpc 00:11:26.431 ************************************ 00:11:26.431 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:26.431 * Looking for test storage... 
00:11:26.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:26.431 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:26.431 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:11:26.431 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:26.431 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:26.431 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:26.431 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:26.431 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:26.431 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:26.431 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:26.431 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:26.431 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:26.431 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:26.431 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:26.431 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:26.431 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:26.431 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:26.431 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:11:26.431 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:26.431 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:26.431 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:26.431 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:11:26.431 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:26.431 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:11:26.431 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:26.431 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:26.431 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:11:26.431 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:26.431 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:11:26.431 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:26.431 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:26.431 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:26.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.432 --rc genhtml_branch_coverage=1 00:11:26.432 --rc genhtml_function_coverage=1 00:11:26.432 --rc genhtml_legend=1 00:11:26.432 --rc geninfo_all_blocks=1 00:11:26.432 --rc geninfo_unexecuted_blocks=1 00:11:26.432 00:11:26.432 ' 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:26.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.432 --rc genhtml_branch_coverage=1 00:11:26.432 --rc genhtml_function_coverage=1 00:11:26.432 --rc genhtml_legend=1 00:11:26.432 --rc geninfo_all_blocks=1 00:11:26.432 --rc geninfo_unexecuted_blocks=1 00:11:26.432 00:11:26.432 ' 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:26.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.432 --rc genhtml_branch_coverage=1 00:11:26.432 --rc genhtml_function_coverage=1 00:11:26.432 --rc genhtml_legend=1 00:11:26.432 --rc geninfo_all_blocks=1 00:11:26.432 --rc geninfo_unexecuted_blocks=1 00:11:26.432 00:11:26.432 ' 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:26.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.432 --rc genhtml_branch_coverage=1 00:11:26.432 --rc genhtml_function_coverage=1 00:11:26.432 --rc genhtml_legend=1 00:11:26.432 --rc geninfo_all_blocks=1 00:11:26.432 --rc geninfo_unexecuted_blocks=1 00:11:26.432 00:11:26.432 ' 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
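Each test file opens with the same lcov version gate traced above: lt 1.15 2 calls cmp_versions, which splits both version strings on '.', '-' and ':' and compares them field by field as integers, padding the shorter one with zeros. A standalone sketch of that comparison (the function name is illustrative; fields are assumed numeric, which the real decimal helper enforces):

    # Returns success when version $1 sorts strictly before version $2.
    version_lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((i = 0; i < max; i++)); do
            (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
            (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
        done
        return 1    # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"     # matches the traced result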
00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:26.432 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:26.432 10:25:00 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:11:26.432 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:33.001 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:33.001 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:33.001 Found net devices under 0000:af:00.0: cvl_0_0 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:33.001 Found net devices under 0000:af:00.1: cvl_0_1 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:33.001 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:33.002 10:25:06 
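The scan above matches the Intel (0x8086) E810 device IDs 0x1592/0x159b and then resolves each PCI function to its kernel interface through /sys/bus/pci/devices/$pci/net/. A self-contained sketch of the same sysfs walk, leaving out the script's pci_bus_cache layer (device IDs taken from the log):

# Sketch of the PCI -> net-device resolution traced above: match Intel E810
# functions and list the kernel interfaces attached under .../net/.
for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor") device=$(<"$pci/device")
    [[ $vendor == 0x8086 && ( $device == 0x1592 || $device == 0x159b ) ]] || continue
    echo "Found ${pci##*/} ($vendor - $device)"
    for net in "$pci"/net/*; do
        [[ -e $net ]] && echo "  net device under ${pci##*/}: ${net##*/}"
    done
done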
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:33.002 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:33.002 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms 00:11:33.002 00:11:33.002 --- 10.0.0.2 ping statistics --- 00:11:33.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.002 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:33.002 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:33.002 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:11:33.002 00:11:33.002 --- 10.0.0.1 ping statistics --- 00:11:33.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.002 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=1449263 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 1449263 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 1449263 ']' 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.002 [2024-12-12 10:25:06.364414] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
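nvmf_tcp_init above splits the two E810 ports across network namespaces: cvl_0_0 moves into a fresh namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), TCP port 4420 is opened in iptables, and a ping in each direction proves the path before nvmf_tgt is launched under ip netns exec. Condensed into a standalone sketch (interface names and addresses as in the log; needs root):

# Condensed sketch of the namespace wiring traced above.
ns=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1   # start from clean interfaces
ip netns add "$ns"
ip link set cvl_0_0 netns "$ns"                      # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side (root namespace)
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$ns" ip link set cvl_0_0 up
ip netns exec "$ns" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root ns -> target ns
ip netns exec "$ns" ping -c 1 10.0.0.1               # target ns -> root ns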
00:11:33.002 [2024-12-12 10:25:06.364457] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:33.002 [2024-12-12 10:25:06.439166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:33.002 [2024-12-12 10:25:06.478610] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:33.002 [2024-12-12 10:25:06.478650] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:33.002 [2024-12-12 10:25:06.478657] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:33.002 [2024-12-12 10:25:06.478663] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:33.002 [2024-12-12 10:25:06.478668] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:33.002 [2024-12-12 10:25:06.480152] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:33.002 [2024-12-12 10:25:06.480261] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:33.002 [2024-12-12 10:25:06.480342] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.002 [2024-12-12 10:25:06.480343] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:33.002 "tick_rate": 2100000000, 00:11:33.002 "poll_groups": [ 00:11:33.002 { 00:11:33.002 "name": "nvmf_tgt_poll_group_000", 00:11:33.002 "admin_qpairs": 0, 00:11:33.002 "io_qpairs": 0, 00:11:33.002 "current_admin_qpairs": 0, 00:11:33.002 "current_io_qpairs": 0, 00:11:33.002 "pending_bdev_io": 0, 00:11:33.002 "completed_nvme_io": 0, 00:11:33.002 "transports": [] 00:11:33.002 }, 00:11:33.002 { 00:11:33.002 "name": "nvmf_tgt_poll_group_001", 00:11:33.002 "admin_qpairs": 0, 00:11:33.002 "io_qpairs": 0, 00:11:33.002 "current_admin_qpairs": 0, 00:11:33.002 "current_io_qpairs": 0, 00:11:33.002 "pending_bdev_io": 0, 00:11:33.002 "completed_nvme_io": 0, 00:11:33.002 "transports": [] 00:11:33.002 }, 00:11:33.002 { 00:11:33.002 "name": "nvmf_tgt_poll_group_002", 00:11:33.002 "admin_qpairs": 0, 00:11:33.002 "io_qpairs": 0, 00:11:33.002 
"current_admin_qpairs": 0, 00:11:33.002 "current_io_qpairs": 0, 00:11:33.002 "pending_bdev_io": 0, 00:11:33.002 "completed_nvme_io": 0, 00:11:33.002 "transports": [] 00:11:33.002 }, 00:11:33.002 { 00:11:33.002 "name": "nvmf_tgt_poll_group_003", 00:11:33.002 "admin_qpairs": 0, 00:11:33.002 "io_qpairs": 0, 00:11:33.002 "current_admin_qpairs": 0, 00:11:33.002 "current_io_qpairs": 0, 00:11:33.002 "pending_bdev_io": 0, 00:11:33.002 "completed_nvme_io": 0, 00:11:33.002 "transports": [] 00:11:33.002 } 00:11:33.002 ] 00:11:33.002 }' 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.002 [2024-12-12 10:25:06.738340] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.002 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:33.002 "tick_rate": 2100000000, 00:11:33.002 "poll_groups": [ 00:11:33.002 { 00:11:33.002 "name": "nvmf_tgt_poll_group_000", 00:11:33.002 "admin_qpairs": 0, 00:11:33.002 "io_qpairs": 0, 00:11:33.002 "current_admin_qpairs": 0, 00:11:33.002 "current_io_qpairs": 0, 00:11:33.002 "pending_bdev_io": 0, 00:11:33.002 "completed_nvme_io": 0, 00:11:33.002 "transports": [ 00:11:33.002 { 00:11:33.002 "trtype": "TCP" 00:11:33.002 } 00:11:33.002 ] 00:11:33.002 }, 00:11:33.002 { 00:11:33.002 "name": "nvmf_tgt_poll_group_001", 00:11:33.002 "admin_qpairs": 0, 00:11:33.002 "io_qpairs": 0, 00:11:33.002 "current_admin_qpairs": 0, 00:11:33.003 "current_io_qpairs": 0, 00:11:33.003 "pending_bdev_io": 0, 00:11:33.003 "completed_nvme_io": 0, 00:11:33.003 "transports": [ 00:11:33.003 { 00:11:33.003 "trtype": "TCP" 00:11:33.003 } 00:11:33.003 ] 00:11:33.003 }, 00:11:33.003 { 00:11:33.003 "name": "nvmf_tgt_poll_group_002", 00:11:33.003 "admin_qpairs": 0, 00:11:33.003 "io_qpairs": 0, 00:11:33.003 "current_admin_qpairs": 0, 00:11:33.003 "current_io_qpairs": 0, 00:11:33.003 "pending_bdev_io": 0, 00:11:33.003 "completed_nvme_io": 0, 00:11:33.003 "transports": [ 00:11:33.003 { 00:11:33.003 "trtype": "TCP" 
00:11:33.003 } 00:11:33.003 ] 00:11:33.003 }, 00:11:33.003 { 00:11:33.003 "name": "nvmf_tgt_poll_group_003", 00:11:33.003 "admin_qpairs": 0, 00:11:33.003 "io_qpairs": 0, 00:11:33.003 "current_admin_qpairs": 0, 00:11:33.003 "current_io_qpairs": 0, 00:11:33.003 "pending_bdev_io": 0, 00:11:33.003 "completed_nvme_io": 0, 00:11:33.003 "transports": [ 00:11:33.003 { 00:11:33.003 "trtype": "TCP" 00:11:33.003 } 00:11:33.003 ] 00:11:33.003 } 00:11:33.003 ] 00:11:33.003 }' 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.003 Malloc1 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
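The jcount and jsum calls above are small jq wrappers used to assert on the nvmf_get_stats JSON: jcount counts how many values a filter yields, jsum adds them up. Their shape, reconstructed from the traced pipelines (a sketch inferred from the xtrace, not the verbatim rpc.sh source):

# Helper pair reconstructed from the pipelines above; $stats holds the JSON
# returned by the nvmf_get_stats RPC (rpc_cmd wraps scripts/rpc.py in the test).
stats=$(rpc_cmd nvmf_get_stats)
jcount() {                      # number of values matched by a jq filter
    local filter=$1
    jq "$filter" <<<"$stats" | wc -l
}
jsum() {                        # arithmetic sum of the matched values
    local filter=$1
    jq "$filter" <<<"$stats" | awk '{s+=$1} END {print s}'
}
(( $(jcount '.poll_groups[].name') == 4 ))         # one poll group per core (-m 0xF)
(( $(jsum '.poll_groups[].admin_qpairs') == 0 ))   # no controllers connected yet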
common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.003 [2024-12-12 10:25:06.922100] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:33.003 [2024-12-12 10:25:06.950620] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:11:33.003 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:33.003 could not add new controller: failed to write to nvme-fabrics device 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:33.003 10:25:06 
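This failure is the intended negative test: allow_any_host was disabled (-d) a few lines up and the host NQN is not yet on the subsystem's allow list, so the target rejects the connect and the fabrics write returns an I/O error. The nvmf_subsystem_add_host call that follows makes the identical connect succeed. The whole check written out as a sketch ($rpc stands for SPDK's scripts/rpc.py; NQNs and addresses as in the log):

# Sketch of the host allow-list check exercised above.
subnqn=nqn.2016-06.io.spdk:cnode1
hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562

$rpc nvmf_subsystem_allow_any_host -d "$subnqn"    # enforce the allow list
nvme connect -t tcp -n "$subnqn" -a 10.0.0.2 -s 4420 --hostnqn="$hostnqn" \
    && echo "unexpected success" || echo "rejected as expected"

$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn"  # put this host on the list
nvme connect -t tcp -n "$subnqn" -a 10.0.0.2 -s 4420 --hostnqn="$hostnqn"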
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.003 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:34.378 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:34.379 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:34.379 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:34.379 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:34.379 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:36.278 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:36.278 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:36.278 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:36.279 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:36.279 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:36.279 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:36.279 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:36.279 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.279 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:36.279 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:36.279 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:36.279 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:36.279 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:36.279 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:36.279 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:36.279 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:36.279 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.279 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.279 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.279 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:36.279 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:36.279 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:36.279 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:36.279 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:36.279 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:36.279 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:36.279 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:36.279 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:36.279 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:36.279 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:36.279 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:36.279 [2024-12-12 10:25:10.266092] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:11:36.279 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:36.279 could not add new controller: failed to write to nvme-fabrics device 00:11:36.279 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:36.279 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:36.279 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:36.279 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:36.279 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:36.279 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.279 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.279 
10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.279 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:37.653 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:37.653 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:37.653 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:37.653 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:37.653 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:39.553 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:39.553 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:39.553 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:39.553 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:39.553 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:39.553 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:39.553 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:39.553 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.553 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:39.553 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:39.553 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:39.553 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:39.553 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:39.553 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:39.553 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:39.553 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:39.553 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.553 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.553 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.553 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:39.812 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:39.812 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:39.812 
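What follows is the main body of the test: rpc.sh repeats the same create/exercise/teardown cycle loops=5 times, per the seq 1 5 above. Condensed, one iteration looks like this ($rpc again standing for the scripts/rpc.py wrapper; the --hostnqn/--hostid flags from NVME_HOST are omitted for brevity):

# Condensed form of the five-iteration cycle traced below.
for i in $(seq 1 5); do
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    # ... waitforserial / device checks happen here in the real test ...
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done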
10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.812 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.812 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.812 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:39.812 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.812 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.812 [2024-12-12 10:25:13.595201] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:39.812 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.812 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:39.812 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.812 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.812 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.812 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:39.812 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.812 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.812 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.812 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:40.745 10:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:40.745 10:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:40.745 10:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:40.745 10:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:40.745 10:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:43.273 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:43.273 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:43.273 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:43.273 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:43.273 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:43.273 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:43.273 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:43.273 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.273 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:43.273 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:43.273 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:43.273 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:43.273 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:43.273 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:43.273 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:43.273 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:43.273 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.273 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.273 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.274 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:43.274 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.274 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.274 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.274 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:43.274 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:43.274 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.274 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.274 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.274 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:43.274 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.274 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.274 [2024-12-12 10:25:16.849164] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:43.274 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.274 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:43.274 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.274 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.274 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
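waitforserial and waitforserial_disconnect, traced repeatedly through this loop, poll lsblk until a block device carrying the subsystem serial (SPDKISFASTANDAWESOME) appears or disappears. Their approximate shape, inferred from the traced commands (the 15-try bound and 2-second sleep come from the '(( i++ <= 15 ))' and 'sleep 2' lines above; not the verbatim autotest_common.sh source):

# Polling helpers reconstructed from the trace.
waitforserial() {
    local serial=$1 i=0
    sleep 2                                  # give the connect time to settle
    while (( i++ <= 15 )); do
        (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
        sleep 2
    done
    return 1
}
waitforserial_disconnect() {
    local serial=$1 i=0
    while (( i++ <= 15 )); do
        lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
        sleep 2
    done
    return 1
}
# usage: waitforserial SPDKISFASTANDAWESOME; nvme disconnect -n nqn.2016-06.io.spdk:cnode1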
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.274 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:43.274 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.274 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.274 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.274 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:44.207 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:44.207 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:44.207 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:44.207 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:44.207 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:46.177 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:46.177 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:46.177 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:46.177 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:46.177 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:46.177 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:46.177 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:46.177 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.177 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:46.177 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:46.177 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:46.177 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:46.177 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:46.177 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:46.177 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:46.177 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:46.177 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.177 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.177 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.177 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:46.177 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.177 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.177 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.177 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:46.177 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:46.177 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.177 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.177 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.177 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.177 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.177 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.177 [2024-12-12 10:25:20.106008] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.177 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.177 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:46.177 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.177 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.177 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.177 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:46.177 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.177 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.177 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.177 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:47.550 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:47.551 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:47.551 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:47.551 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:47.551 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:49.449 
10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:49.449 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:49.449 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:49.449 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:49.449 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:49.449 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:49.449 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:49.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.449 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:49.449 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:49.449 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:49.449 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:49.449 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:49.449 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:49.449 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:49.449 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:49.449 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.449 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.449 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.449 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:49.449 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.449 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.449 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.449 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:49.449 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:49.449 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.449 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.449 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.449 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:49.449 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:49.449 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.449 [2024-12-12 10:25:23.360322] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:49.449 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.449 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:49.449 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.449 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.449 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.449 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:49.449 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.449 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.449 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.449 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:50.822 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:50.822 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:50.822 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:50.822 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:50.822 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:52.729 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:52.729 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:52.729 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:52.729 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:52.730 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:52.730 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:52.730 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:52.730 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.730 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:52.730 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:52.730 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:52.730 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
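Each iteration of the loop at target/rpc.sh:81-94 traced above runs the same create/connect/teardown cycle against the target via scripts/rpc.py. A condensed sketch of one iteration, using the NQN, serial, addresses, and host identity from this run (it assumes waitforserial/waitforserial_disconnect as sketched earlier and a Malloc1 bdev already created; error handling is omitted):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    hostid=80b56b8f-cbc7-e911-906e-0017a4403562

    $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5        # expose the bdev as namespace 5
    $rpc nvmf_subsystem_allow_any_host "$nqn"
    nvme connect --hostnqn="nqn.2014-08.org.nvmexpress:uuid:$hostid" \
        --hostid="$hostid" -t tcp -n "$nqn" -a 10.0.0.2 -s 4420
    waitforserial SPDKISFASTANDAWESOME                    # wait for the namespace to surface
    nvme disconnect -n "$nqn"
    waitforserial_disconnect SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_remove_ns "$nqn" 5
    $rpc nvmf_delete_subsystem "$nqn"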
00:11:52.730 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:52.730 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:52.730 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:52.730 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:52.730 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.730 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.730 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.730 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:52.730 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.730 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.730 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.730 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:52.730 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:52.730 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.730 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.730 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.730 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:52.730 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.730 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.730 [2024-12-12 10:25:26.715998] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:52.730 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.730 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:52.730 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.730 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.730 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.730 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:52.730 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.730 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.730 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.730 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:54.105 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:54.105 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:54.105 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:54.105 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:54.105 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:56.005 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:56.005 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:56.005 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:56.005 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:56.005 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:56.005 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:56.005 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:56.005 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.005 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:56.005 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:56.005 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:56.005 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:56.005 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:56.005 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:56.005 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:56.005 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:56.005 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.005 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.005 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.005 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:56.005 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.005 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.005 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.005 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:11:56.005 
10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:56.005 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:56.006 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.006 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.006 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.006 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:56.006 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.006 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.006 [2024-12-12 10:25:30.026940] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.265 [2024-12-12 10:25:30.079030] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.265 
10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.265 [2024-12-12 10:25:30.127174] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.265 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.266 [2024-12-12 10:25:30.175325] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.266 [2024-12-12 10:25:30.227507] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.266 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:11:56.266 "tick_rate": 2100000000, 00:11:56.266 "poll_groups": [ 00:11:56.266 { 00:11:56.266 "name": "nvmf_tgt_poll_group_000", 00:11:56.266 "admin_qpairs": 2, 00:11:56.266 "io_qpairs": 168, 00:11:56.266 "current_admin_qpairs": 0, 00:11:56.266 "current_io_qpairs": 0, 00:11:56.266 "pending_bdev_io": 0, 00:11:56.266 "completed_nvme_io": 268, 00:11:56.266 "transports": [ 00:11:56.266 { 00:11:56.266 "trtype": "TCP" 00:11:56.266 } 00:11:56.266 ] 00:11:56.266 }, 00:11:56.266 { 00:11:56.266 "name": "nvmf_tgt_poll_group_001", 00:11:56.266 "admin_qpairs": 2, 00:11:56.266 "io_qpairs": 168, 00:11:56.266 "current_admin_qpairs": 0, 00:11:56.266 "current_io_qpairs": 0, 00:11:56.266 "pending_bdev_io": 0, 00:11:56.266 "completed_nvme_io": 269, 00:11:56.266 "transports": [ 00:11:56.266 { 00:11:56.266 "trtype": "TCP" 00:11:56.266 } 00:11:56.266 ] 00:11:56.266 }, 00:11:56.266 { 00:11:56.266 "name": "nvmf_tgt_poll_group_002", 00:11:56.266 "admin_qpairs": 1, 00:11:56.266 "io_qpairs": 168, 00:11:56.266 "current_admin_qpairs": 0, 00:11:56.266 "current_io_qpairs": 0, 00:11:56.266 "pending_bdev_io": 0, 00:11:56.266 "completed_nvme_io": 267, 00:11:56.266 "transports": [ 00:11:56.266 { 00:11:56.266 "trtype": "TCP" 00:11:56.266 } 00:11:56.266 ] 00:11:56.266 }, 00:11:56.266 { 00:11:56.266 "name": "nvmf_tgt_poll_group_003", 00:11:56.266 "admin_qpairs": 2, 00:11:56.266 "io_qpairs": 168, 00:11:56.266 "current_admin_qpairs": 0, 00:11:56.266 "current_io_qpairs": 0, 00:11:56.266 "pending_bdev_io": 0, 00:11:56.266 "completed_nvme_io": 218, 00:11:56.266 "transports": [ 00:11:56.266 { 00:11:56.266 "trtype": "TCP" 00:11:56.266 } 00:11:56.266 ] 00:11:56.266 } 00:11:56.266 ] 00:11:56.266 }' 00:11:56.266 10:25:30 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:56.525 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:56.525 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:56.525 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:56.525 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:56.525 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:56.525 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:56.525 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:56.525 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:56.525 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:11:56.525 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:56.525 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:56.525 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:11:56.525 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:56.525 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:11:56.525 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:56.525 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:11:56.525 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:56.525 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:56.525 rmmod nvme_tcp 00:11:56.525 rmmod nvme_fabrics 00:11:56.525 rmmod nvme_keyring 00:11:56.525 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:56.525 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:11:56.525 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:11:56.525 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 1449263 ']' 00:11:56.525 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 1449263 00:11:56.525 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 1449263 ']' 00:11:56.525 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 1449263 00:11:56.525 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:11:56.525 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:56.525 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1449263 00:11:56.525 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:56.525 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:56.525 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
1449263' 00:11:56.525 killing process with pid 1449263 00:11:56.525 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 1449263 00:11:56.525 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 1449263 00:11:56.784 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:56.784 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:56.784 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:56.784 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:11:56.784 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:11:56.784 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:56.784 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:11:56.784 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:56.784 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:56.784 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.784 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:56.784 10:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.321 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:59.321 00:11:59.321 real 0m32.541s 00:11:59.321 user 1m38.164s 00:11:59.321 sys 0m6.392s 00:11:59.321 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:59.321 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.321 ************************************ 00:11:59.321 END TEST nvmf_rpc 00:11:59.321 ************************************ 00:11:59.321 10:25:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:59.321 10:25:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:59.321 10:25:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:59.321 10:25:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:59.321 ************************************ 00:11:59.321 START TEST nvmf_invalid 00:11:59.321 ************************************ 00:11:59.321 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:59.321 * Looking for test storage... 
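The jsum helper exercised just before the teardown above (target/rpc.sh@19-20) sums one numeric field across the nvmf_get_stats JSON by streaming jq output into awk; summing admin_qpairs over the four poll groups yields the 7 checked above, and io_qpairs yields 672. A sketch of that aggregation, assuming $rpc points at scripts/rpc.py as in this run:

    # Capture target statistics and sum a field across all poll groups.
    stats=$($rpc nvmf_get_stats)
    jsum() {
        local filter=$1
        # jq emits one number per poll group; awk accumulates them.
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 2+2+1+2 = 7 in this run
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 4 * 168 = 672 in this run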
00:11:59.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:59.321 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:59.321 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:11:59.321 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:59.321 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:59.321 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:59.321 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:59.321 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:59.321 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:11:59.321 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:11:59.321 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:11:59.321 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:11:59.321 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:11:59.321 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:11:59.321 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:11:59.321 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:59.321 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:11:59.321 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:11:59.321 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:59.321 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:59.321 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:11:59.321 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:11:59.321 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:59.321 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:11:59.321 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:11:59.321 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:11:59.321 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:11:59.321 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:59.321 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:11:59.321 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:11:59.321 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:59.321 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:59.321 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:11:59.321 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:59.321 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:59.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.321 --rc genhtml_branch_coverage=1 00:11:59.321 --rc genhtml_function_coverage=1 00:11:59.321 --rc genhtml_legend=1 00:11:59.321 --rc geninfo_all_blocks=1 00:11:59.321 --rc geninfo_unexecuted_blocks=1 00:11:59.321 00:11:59.321 ' 00:11:59.321 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:59.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.321 --rc genhtml_branch_coverage=1 00:11:59.321 --rc genhtml_function_coverage=1 00:11:59.321 --rc genhtml_legend=1 00:11:59.321 --rc geninfo_all_blocks=1 00:11:59.321 --rc geninfo_unexecuted_blocks=1 00:11:59.321 00:11:59.321 ' 00:11:59.321 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:59.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.321 --rc genhtml_branch_coverage=1 00:11:59.321 --rc genhtml_function_coverage=1 00:11:59.321 --rc genhtml_legend=1 00:11:59.321 --rc geninfo_all_blocks=1 00:11:59.321 --rc geninfo_unexecuted_blocks=1 00:11:59.321 00:11:59.321 ' 00:11:59.321 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:59.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.321 --rc genhtml_branch_coverage=1 00:11:59.321 --rc genhtml_function_coverage=1 00:11:59.321 --rc genhtml_legend=1 00:11:59.321 --rc geninfo_all_blocks=1 00:11:59.321 --rc geninfo_unexecuted_blocks=1 00:11:59.321 00:11:59.321 ' 00:11:59.321 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:59.321 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:11:59.321 10:25:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:59.321 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:59.321 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:59.321 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:59.321 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:59.321 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:59.321 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:59.322 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:11:59.322 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:05.893 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:05.893 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:05.893 Found net devices under 0000:af:00.0: cvl_0_0 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:05.893 Found net devices under 0000:af:00.1: cvl_0_1 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:05.893 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:05.894 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:05.894 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:05.894 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:05.894 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.353 ms 00:12:05.894 00:12:05.894 --- 10.0.0.2 ping statistics --- 00:12:05.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.894 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:12:05.894 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:05.894 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:05.894 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:12:05.894 00:12:05.894 --- 10.0.0.1 ping statistics --- 00:12:05.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.894 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:12:05.894 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:05.894 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:12:05.894 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:05.894 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:05.894 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:05.894 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:05.894 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:05.894 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:05.894 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:05.894 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:05.894 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:05.894 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:05.894 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:05.894 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=1456917 00:12:05.894 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:05.894 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 1456917 00:12:05.894 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 1456917 ']' 00:12:05.894 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.894 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:05.894 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.894 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:05.894 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:05.894 [2024-12-12 10:25:39.010978] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
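Condensed from the nvmf_tcp_init trace above: the harness keeps one E810 port (cvl_0_1, 10.0.0.1) in the root namespace as the initiator and moves its peer (cvl_0_0, 10.0.0.2) into a private network namespace for the target, then proves both directions with a ping before starting nvmf_tgt. A minimal standalone sketch of the same setup (interface names and addresses as in this run) is:

    # target port in its own netns, initiator port stays in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP listener port, then verify reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With that in place, nvmf_tgt is launched inside the namespace (the ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF line traced above), so only traffic over the cvl_0_0/cvl_0_1 pair can reach it.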
00:12:05.894 [2024-12-12 10:25:39.011022] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:05.894 [2024-12-12 10:25:39.087614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:05.894 [2024-12-12 10:25:39.129067] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:05.894 [2024-12-12 10:25:39.129103] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:05.894 [2024-12-12 10:25:39.129110] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:05.894 [2024-12-12 10:25:39.129116] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:05.894 [2024-12-12 10:25:39.129121] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:05.894 [2024-12-12 10:25:39.133587] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.894 [2024-12-12 10:25:39.133612] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:05.894 [2024-12-12 10:25:39.133718] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.894 [2024-12-12 10:25:39.133719] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:05.894 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:05.894 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:12:05.894 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:05.894 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:05.894 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:05.894 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:05.894 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:05.894 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode7096 00:12:05.894 [2024-12-12 10:25:39.444574] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:05.894 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:05.894 { 00:12:05.894 "nqn": "nqn.2016-06.io.spdk:cnode7096", 00:12:05.894 "tgt_name": "foobar", 00:12:05.894 "method": "nvmf_create_subsystem", 00:12:05.894 "req_id": 1 00:12:05.894 } 00:12:05.894 Got JSON-RPC error response 00:12:05.894 response: 00:12:05.894 { 00:12:05.894 "code": -32603, 00:12:05.894 "message": "Unable to find target foobar" 00:12:05.894 }' 00:12:05.894 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:05.894 { 00:12:05.894 "nqn": "nqn.2016-06.io.spdk:cnode7096", 00:12:05.894 "tgt_name": "foobar", 00:12:05.894 "method": "nvmf_create_subsystem", 00:12:05.894 "req_id": 1 00:12:05.894 } 00:12:05.894 Got JSON-RPC error response 00:12:05.894 
response: 00:12:05.894 { 00:12:05.894 "code": -32603, 00:12:05.894 "message": "Unable to find target foobar" 00:12:05.894 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:05.894 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:05.894 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode8759 00:12:05.894 [2024-12-12 10:25:39.645234] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8759: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:05.894 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:05.894 { 00:12:05.894 "nqn": "nqn.2016-06.io.spdk:cnode8759", 00:12:05.894 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:05.894 "method": "nvmf_create_subsystem", 00:12:05.894 "req_id": 1 00:12:05.894 } 00:12:05.894 Got JSON-RPC error response 00:12:05.894 response: 00:12:05.894 { 00:12:05.894 "code": -32602, 00:12:05.894 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:05.894 }' 00:12:05.894 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:05.894 { 00:12:05.894 "nqn": "nqn.2016-06.io.spdk:cnode8759", 00:12:05.894 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:05.894 "method": "nvmf_create_subsystem", 00:12:05.894 "req_id": 1 00:12:05.894 } 00:12:05.894 Got JSON-RPC error response 00:12:05.894 response: 00:12:05.894 { 00:12:05.894 "code": -32602, 00:12:05.894 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:05.894 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:05.894 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:05.894 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode28072 00:12:05.894 [2024-12-12 10:25:39.845912] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28072: invalid model number 'SPDK_Controller' 00:12:05.894 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:05.894 { 00:12:05.894 "nqn": "nqn.2016-06.io.spdk:cnode28072", 00:12:05.894 "model_number": "SPDK_Controller\u001f", 00:12:05.894 "method": "nvmf_create_subsystem", 00:12:05.894 "req_id": 1 00:12:05.894 } 00:12:05.894 Got JSON-RPC error response 00:12:05.894 response: 00:12:05.894 { 00:12:05.894 "code": -32602, 00:12:05.894 "message": "Invalid MN SPDK_Controller\u001f" 00:12:05.894 }' 00:12:05.894 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:05.894 { 00:12:05.894 "nqn": "nqn.2016-06.io.spdk:cnode28072", 00:12:05.894 "model_number": "SPDK_Controller\u001f", 00:12:05.894 "method": "nvmf_create_subsystem", 00:12:05.894 "req_id": 1 00:12:05.894 } 00:12:05.894 Got JSON-RPC error response 00:12:05.894 response: 00:12:05.894 { 00:12:05.894 "code": -32602, 00:12:05.894 "message": "Invalid MN SPDK_Controller\u001f" 00:12:05.894 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:05.894 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:05.894 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:05.895 10:25:39 
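The wall of printf %x / echo -e / string+= entries that follows is gen_random_s building a 21-byte serial number out of the ASCII range 32-127. Reconstructed from the trace (and glossing over the whitespace and quoting edge cases the real target/invalid.sh helper has to handle), the loop amounts to:

    # sketch of gen_random_s as seen in this trace; not the verbatim helper
    gen_random_s() {
        local length=$1 ll string=
        local chars=($(seq 32 127))   # decimal code points, matching the traced array
        for ((ll = 0; ll < length; ll++)); do
            # printf %x turns the decimal code into hex; echo -e renders the byte
            string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
        done
        echo "$string"
    }

Each printf %x / echo -e / string+= triple below is one iteration of that loop.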
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:05.895 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:05.895 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:05.895 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:05.895 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:05.895 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:05.895 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:05.895 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:05.895 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:05.895 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:05.895 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:12:05.895 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:05.895 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:05.895 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:05.895 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:05.895 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:05.895 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:05.895 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:05.895 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:05.895 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:05.895 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:12:05.895 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:05.895 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:12:05.895 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:05.895 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:05.895 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:12:06.153 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:06.153 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:12:06.153 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.153 10:25:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.153 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:06.153 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:06.153 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:06.153 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.153 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.153 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:12:06.153 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:06.153 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:12:06.153 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.153 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.153 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:12:06.153 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:06.153 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:06.153 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.153 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.153 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:06.153 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:06.153 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:06.153 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.153 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.153 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:12:06.153 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:06.153 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:12:06.153 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.153 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.153 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:06.153 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:06.153 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:06.153 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.153 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.153 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:06.153 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:06.153 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:06.153 10:25:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.153 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.153 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:12:06.153 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:06.153 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:12:06.153 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.154 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.154 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:12:06.154 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:06.154 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:12:06.154 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.154 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.154 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:06.154 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:06.154 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:06.154 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.154 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.154 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:06.154 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:06.154 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:06.154 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.154 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.154 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:12:06.154 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:06.154 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:12:06.154 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.154 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.154 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:06.154 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:06.154 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:06.154 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.154 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.154 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:06.154 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:06.154 
10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:06.154 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.154 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.154 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:06.154 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:06.154 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:06.154 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.154 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.154 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:06.154 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:06.154 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:06.154 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.154 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.154 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ R == \- ]] 00:12:06.154 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Rlx(bn5hQ:U4=8R;oQ`0V' 00:12:06.154 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'Rlx(bn5hQ:U4=8R;oQ`0V' nqn.2016-06.io.spdk:cnode13852 00:12:06.413 [2024-12-12 10:25:40.215166] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13852: invalid serial number 'Rlx(bn5hQ:U4=8R;oQ`0V' 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:06.413 { 00:12:06.413 "nqn": "nqn.2016-06.io.spdk:cnode13852", 00:12:06.413 "serial_number": "Rlx(bn5hQ:U4=8R;oQ`0V", 00:12:06.413 "method": "nvmf_create_subsystem", 00:12:06.413 "req_id": 1 00:12:06.413 } 00:12:06.413 Got JSON-RPC error response 00:12:06.413 response: 00:12:06.413 { 00:12:06.413 "code": -32602, 00:12:06.413 "message": "Invalid SN Rlx(bn5hQ:U4=8R;oQ`0V" 00:12:06.413 }' 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:06.413 { 00:12:06.413 "nqn": "nqn.2016-06.io.spdk:cnode13852", 00:12:06.413 "serial_number": "Rlx(bn5hQ:U4=8R;oQ`0V", 00:12:06.413 "method": "nvmf_create_subsystem", 00:12:06.413 "req_id": 1 00:12:06.413 } 00:12:06.413 Got JSON-RPC error response 00:12:06.413 response: 00:12:06.413 { 00:12:06.413 "code": -32602, 00:12:06.413 "message": "Invalid SN Rlx(bn5hQ:U4=8R;oQ`0V" 00:12:06.413 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' 
'77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:06.413 10:25:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:12:06.413 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:06.414 10:25:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.414 10:25:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.414 10:25:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.414 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.673 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:06.673 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:06.673 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:06.673 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.673 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.673 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:06.673 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:06.673 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:06.673 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.673 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.673 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:12:06.673 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:06.673 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:12:06.673 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.673 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.673 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:06.673 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:06.673 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:06.673 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.673 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.673 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:06.673 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:06.673 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:06.673 10:25:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.673 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.673 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:12:06.673 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:06.674 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:12:06.674 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.674 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.674 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:06.674 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:06.674 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:06.674 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.674 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.674 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:12:06.674 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:06.674 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:06.674 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.674 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.674 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:06.674 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:06.674 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:06.674 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.674 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.674 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:06.674 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:06.674 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:06.674 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.674 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.674 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:12:06.674 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:06.674 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:12:06.674 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:06.674 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:06.674 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:06.674 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:06.674 
10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<'
00:12:06.674 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:12:06.674 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:12:06.674 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84
00:12:06.674 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54'
00:12:06.674 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T
00:12:06.674 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:12:06.674 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:12:06.674 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ l == \- ]]
00:12:06.674 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'lI1&Y#Z-(9CTmZE-]B52"v=$NKgV9+yjCFUhRK*<T'
00:12:08.744 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:11.280 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:12:11.280
00:12:11.280 real 0m11.976s
00:12:11.280 user 0m18.616s
00:12:11.280 sys 0m5.347s
00:12:11.280 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:11.280 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:12:11.280 ************************************
00:12:11.280 END TEST nvmf_invalid
00:12:11.280 ************************************
00:12:11.280 10:25:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:12:11.280 10:25:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:12:11.280 10:25:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:11.280 10:25:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:12:11.280 ************************************
00:12:11.280 START TEST nvmf_connect_stress
00:12:11.280 ************************************
00:12:11.280 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:12:11.280 * Looking for test storage...
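Before following connect_stress, a recap of what nvmf_invalid just finished exercising: every nvmf_create_subsystem call above was handed a deliberately bad argument (an unknown target name, serial and model numbers carrying the 0x1f control byte, and the oversized random strings generated above), and the test asserted on the JSON-RPC error text. Any of those probes can be reproduced by hand against a running target; for example, the first one from this run:

    # expects the RPC to fail; /var/tmp/spdk.sock is rpc.py's default socket
    scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode7096
    # traced response:  "code": -32603, "message": "Unable to find target foobar"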
00:12:11.280 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:11.280 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:11.280 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:12:11.280 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:11.280 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:11.280 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:11.280 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:11.280 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:11.280 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:11.280 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:11.280 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:12:11.280 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:11.280 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:11.280 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:11.280 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:11.280 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:11.280 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:11.280 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:11.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.281 --rc genhtml_branch_coverage=1 00:12:11.281 --rc genhtml_function_coverage=1 00:12:11.281 --rc genhtml_legend=1 00:12:11.281 --rc geninfo_all_blocks=1 00:12:11.281 --rc geninfo_unexecuted_blocks=1 00:12:11.281 00:12:11.281 ' 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:11.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.281 --rc genhtml_branch_coverage=1 00:12:11.281 --rc genhtml_function_coverage=1 00:12:11.281 --rc genhtml_legend=1 00:12:11.281 --rc geninfo_all_blocks=1 00:12:11.281 --rc geninfo_unexecuted_blocks=1 00:12:11.281 00:12:11.281 ' 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:11.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.281 --rc genhtml_branch_coverage=1 00:12:11.281 --rc genhtml_function_coverage=1 00:12:11.281 --rc genhtml_legend=1 00:12:11.281 --rc geninfo_all_blocks=1 00:12:11.281 --rc geninfo_unexecuted_blocks=1 00:12:11.281 00:12:11.281 ' 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:11.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.281 --rc genhtml_branch_coverage=1 00:12:11.281 --rc genhtml_function_coverage=1 00:12:11.281 --rc genhtml_legend=1 00:12:11.281 --rc geninfo_all_blocks=1 00:12:11.281 --rc geninfo_unexecuted_blocks=1 00:12:11.281 00:12:11.281 ' 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
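The lt/cmp_versions trace above is how the harness decides that the installed lcov (1.15 here) predates version 2 and therefore needs the old-style --rc options just exported. Condensed from the scripts/common.sh trace (a simplified equivalent, not the verbatim implementation), the comparison is:

    # does version $1 sort before version $2?  e.g.  lt 1.15 2  ->  true
    lt() {
        local IFS=.- v1 v2 i
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            ((${v1[i]:-0} < ${v2[i]:-0})) && return 0   # earliest differing field decides
            ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
        done
        return 1   # equal versions are not "less than"
    }

Here ver1=(1 15) is checked against ver2=(2): the first fields already order them, so the branch-coverage flags stay in LCOV_OPTS.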
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:12:11.281 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.281 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:11.282 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:11.282 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:11.282 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:11.282 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:12:11.282 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:12:17.852 10:25:50 
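The "integer expression expected" message above is a real, if tolerated, bug in the traced run: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', and test(1) cannot compare an empty expansion numerically. The harness shrugs it off because the branch simply falls through, but the usual defensive pattern is to give the variable a numeric default before the test. A minimal sketch, with SPDK_TEST_FOO as a placeholder name rather than the actual variable from common.sh:

    #!/usr/bin/env bash
    # Default the flag to 0 so the numeric test never sees an empty string.
    # SPDK_TEST_FOO is hypothetical; the real script checks one of its own flags.
    if [ "${SPDK_TEST_FOO:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi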
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:17.852 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:17.852 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:17.852 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:17.853 Found net devices under 0000:af:00.0: cvl_0_0 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:17.853 Found net devices under 0000:af:00.1: cvl_0_1 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:17.853 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:17.853 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.438 ms 00:12:17.853 00:12:17.853 --- 10.0.0.2 ping statistics --- 00:12:17.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:17.853 rtt min/avg/max/mdev = 0.438/0.438/0.438/0.000 ms 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:17.853 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:17.853 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:12:17.853 00:12:17.853 --- 10.0.0.1 ping statistics --- 00:12:17.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:17.853 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:17.853 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:17.853 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:17.853 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:17.853 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:17.853 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:17.853 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=1461019 00:12:17.853 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:17.853 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 1461019 00:12:17.853 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 1461019 ']' 00:12:17.853 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.853 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:17.853 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:12:17.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:17.853 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:17.853 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:17.853 [2024-12-12 10:25:51.087392] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:12:17.853 [2024-12-12 10:25:51.087436] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:17.853 [2024-12-12 10:25:51.165038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:17.853 [2024-12-12 10:25:51.205872] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:17.853 [2024-12-12 10:25:51.205906] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:17.853 [2024-12-12 10:25:51.205914] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:17.853 [2024-12-12 10:25:51.205921] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:17.853 [2024-12-12 10:25:51.205926] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:17.853 [2024-12-12 10:25:51.207229] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:17.853 [2024-12-12 10:25:51.207257] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:17.853 [2024-12-12 10:25:51.207257] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:17.853 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:17.853 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:12:17.853 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:17.853 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:17.853 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:17.853 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:17.853 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:17.853 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.853 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:17.853 [2024-12-12 10:25:51.343006] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:17.853 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.853 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:17.853 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
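The stretch from prepare_net_devs through the two pings above builds the whole test topology on a single host: the first E810 port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and a tagged iptables rule opens TCP port 4420 between them. Condensed from the trace into a sketch (the comment string on the rule is shortened here):

    # Target NIC lives in its own namespace; initiator NIC stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # The SPDK_NVMF comment lets teardown sweep the rule out later with
    # iptables-save | grep -v SPDK_NVMF | iptables-restore.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF: allow tcp/4420'
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator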
00:12:17.853 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:17.853 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.853 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:17.853 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.853 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:17.854 [2024-12-12 10:25:51.367233] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:17.854 NULL1 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1461193 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:17.854 10:25:51 
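Target bring-up in this test boils down to the nvmf_tgt process plus four RPCs, all visible above. Restated with scripts/rpc.py, which is what rpc_cmd ultimately invokes (paths assume the top of an SPDK build tree):

    # Start the target inside the namespace: shm id 0, all trace groups,
    # core mask 0xE (the three reactors on cores 1-3 seen in the trace).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # options as recorded above
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10         # allow any host, serial, max 10 namespaces
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512   # 1000 MiB null bdev, 512 B blocks

This excerpt does not capture the step that attaches NULL1 to the subsystem as a namespace, so that step is omitted from the sketch.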
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:17.854 10:25:51 
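With the target serving 10.0.0.2:4420, the script backgrounds the connect_stress client for a 10-second run and, in the seq 1 20 loop above, appends twenty RPC invocations to rpc.txt for replay while connections churn (the cat bodies themselves are not echoed into this trace). The client invocation, restated on one screen; the -r argument is the standard SPDK transport ID string:

    ./test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &
    PERF_PID=$!   # 1461193 in this run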
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461193 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461193 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.854 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:18.112 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.112 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461193 00:12:18.112 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:18.112 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.112 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:18.678 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.678 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461193 00:12:18.678 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:18.678 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.678 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:18.936 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.936 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461193 00:12:18.936 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:18.936 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.936 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:19.194 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.194 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461193 00:12:19.194 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:19.194 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.194 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:19.452 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.452 10:25:53 
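The kill -0 / rpc_cmd pairs that begin above, and repeat below once per iteration, are the stress loop itself: as long as the client is alive, keep replaying the batched RPCs at the target. kill -0 delivers no signal at all; it only reports whether the PID still exists and is signalable. A sketch of the pattern, assuming rpc.py's batch mode that executes one command per stdin line (rpc_cmd uses the same mechanism):

    # Hammer the target with management RPCs while the stress client runs.
    while kill -0 "$PERF_PID" 2>/dev/null; do
        ./scripts/rpc.py < rpc.txt   # replay the 20 batched calls
    done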
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461193 00:12:19.452 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:19.452 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.452 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:20.020 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.020 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461193 00:12:20.020 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:20.020 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.020 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:20.279 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.279 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461193 00:12:20.279 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:20.279 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.279 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:20.538 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.538 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461193 00:12:20.538 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:20.538 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.538 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:20.796 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.796 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461193 00:12:20.796 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:20.796 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.796 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.055 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.055 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461193 00:12:21.055 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:21.055 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.055 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.619 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.619 10:25:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461193 00:12:21.619 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:21.619 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.619 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.877 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.877 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461193 00:12:21.877 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:21.877 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.877 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.136 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.136 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461193 00:12:22.136 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:22.136 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.136 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.394 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.394 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461193 00:12:22.394 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:22.394 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.394 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.653 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.653 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461193 00:12:22.653 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:22.653 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.653 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:23.220 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.220 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461193 00:12:23.220 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:23.220 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.220 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:23.478 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.478 10:25:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461193 00:12:23.478 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:23.478 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.478 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:23.737 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.737 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461193 00:12:23.737 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:23.737 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.737 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:23.995 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.995 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461193 00:12:23.995 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:23.995 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.995 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:24.562 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.562 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461193 00:12:24.562 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:24.562 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.562 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:24.820 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.820 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461193 00:12:24.820 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:24.820 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.820 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:25.078 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.079 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461193 00:12:25.079 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:25.079 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.079 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:25.337 10:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.337 10:25:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461193 00:12:25.337 10:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:25.337 10:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.337 10:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:25.595 10:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.595 10:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461193 00:12:25.595 10:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:25.595 10:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.595 10:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:26.162 10:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.162 10:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461193 00:12:26.162 10:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:26.162 10:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.162 10:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:26.420 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.420 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461193 00:12:26.420 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:26.420 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.420 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:26.677 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.677 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461193 00:12:26.677 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:26.677 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.677 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:26.935 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.935 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461193 00:12:26.935 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:26.935 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.935 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.500 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.500 10:26:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461193 00:12:27.500 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:27.500 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.500 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.758 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.758 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461193 00:12:27.758 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:27.758 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.758 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.758 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:28.017 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.017 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461193 00:12:28.017 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1461193) - No such process 00:12:28.017 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1461193 00:12:28.017 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:28.017 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:28.017 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:28.018 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:28.018 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:12:28.018 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:28.018 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:12:28.018 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:28.018 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:28.018 rmmod nvme_tcp 00:12:28.018 rmmod nvme_fabrics 00:12:28.018 rmmod nvme_keyring 00:12:28.018 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:28.018 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:12:28.018 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:12:28.018 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 1461019 ']' 00:12:28.018 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 1461019 00:12:28.018 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 1461019 ']' 00:12:28.018 10:26:01 
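The "No such process" message above is the loop's normal exit, not a failure: the client was started with -t 10, so it finishes on its own and the next kill -0 probe errors out. What follows in the trace is the reap-and-teardown path, roughly:

    while kill -0 "$PERF_PID" 2>/dev/null; do
        :   # RPC replay happens here in the real script
    done
    wait "$PERF_PID"   # collect the client's exit status; under set -e a
                       # non-zero status would fail the test here
    rm -f rpc.txt      # drop the batched RPC file

nvmftestfini then unloads nvme-tcp, nvme-fabrics and nvme-keyring (the rmmod lines above) and kills the nvmf_tgt reactor process, pid 1461019, after checking its command name with the ps --no-headers -o comm= probe to decide how to kill it.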
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 1461019 00:12:28.018 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:12:28.018 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:28.018 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1461019 00:12:28.018 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:28.018 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:28.018 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1461019' 00:12:28.018 killing process with pid 1461019 00:12:28.018 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 1461019 00:12:28.018 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 1461019 00:12:28.277 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:28.277 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:28.277 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:28.277 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:12:28.277 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:12:28.277 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:12:28.277 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:28.277 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:28.277 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:28.277 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.277 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:28.277 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.810 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:30.810 00:12:30.810 real 0m19.357s 00:12:30.810 user 0m40.490s 00:12:30.810 sys 0m8.556s 00:12:30.810 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:30.810 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:30.810 ************************************ 00:12:30.810 END TEST nvmf_connect_stress 00:12:30.810 ************************************ 00:12:30.810 10:26:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:30.810 10:26:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:30.810 
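The iptr helper above is the counterpart of the tagged rule inserted during setup: because every firewall rule the harness adds carries an SPDK_NVMF comment, teardown can drop them all by filtering the saved ruleset and restoring what remains, with no per-rule bookkeeping:

    # Remove every rule the test added, and nothing else.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

After the namespace is removed and the leftover address is flushed from cvl_0_1, the suite reports PASS for nvmf_connect_stress (19.357s wall time) and run_test immediately starts nvmf_fused_ordering, which re-sources the same common.sh and repeats the cycle above.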
10:26:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:30.810 10:26:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:30.810 ************************************ 00:12:30.810 START TEST nvmf_fused_ordering 00:12:30.810 ************************************ 00:12:30.810 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:30.810 * Looking for test storage... 00:12:30.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:30.810 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:30.810 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:30.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.811 --rc genhtml_branch_coverage=1 00:12:30.811 --rc genhtml_function_coverage=1 00:12:30.811 --rc genhtml_legend=1 00:12:30.811 --rc geninfo_all_blocks=1 00:12:30.811 --rc geninfo_unexecuted_blocks=1 00:12:30.811 00:12:30.811 ' 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:30.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.811 --rc genhtml_branch_coverage=1 00:12:30.811 --rc genhtml_function_coverage=1 00:12:30.811 --rc genhtml_legend=1 00:12:30.811 --rc geninfo_all_blocks=1 00:12:30.811 --rc geninfo_unexecuted_blocks=1 00:12:30.811 00:12:30.811 ' 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:30.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.811 --rc genhtml_branch_coverage=1 00:12:30.811 --rc genhtml_function_coverage=1 00:12:30.811 --rc genhtml_legend=1 00:12:30.811 --rc geninfo_all_blocks=1 00:12:30.811 --rc geninfo_unexecuted_blocks=1 00:12:30.811 00:12:30.811 ' 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:30.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.811 --rc genhtml_branch_coverage=1 00:12:30.811 --rc genhtml_function_coverage=1 00:12:30.811 --rc genhtml_legend=1 00:12:30.811 --rc geninfo_all_blocks=1 00:12:30.811 --rc geninfo_unexecuted_blocks=1 00:12:30.811 00:12:30.811 ' 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same three toolchain directories repeated five more times]:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[previous PATH] 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[previous PATH] 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo [the exported PATH, identical to the paths/export.sh@4 value] 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33
-- # '[' '' -eq 1 ']' 00:12:30.811 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:30.811 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:30.812 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:30.812 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.812 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:30.812 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.812 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:30.812 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:30.812 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:12:30.812 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:12:36.082 10:26:10 
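The "[: : integer expression expected" error logged above is a classic test(1) pitfall: nvmf/common.sh line 33 hands -eq an empty expansion ('[' '' -eq 1 ']'), and -eq requires integers on both sides. A short sketch of the failure mode and two common guards; VAR is a stand-in for whatever variable that line expands:

VAR=""
[ "$VAR" -eq 1 ]                     # reproduces the error: '' is not an integer
[ -n "$VAR" ] && [ "$VAR" -eq 1 ]    # guard: skip the numeric test when empty
[ "${VAR:-0}" -eq 1 ]                # or substitute a numeric default first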
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:36.082 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:36.082 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:36.082 Found net devices under 0000:af:00.0: cvl_0_0 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:36.082 Found net devices under 0000:af:00.1: cvl_0_1 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:12:36.082 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:36.083 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:36.083 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:36.083 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:36.083 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:36.083 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:36.083 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:36.083 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:36.083 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:36.083 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:36.083 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:36.083 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:36.083 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:36.083 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:36.083 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:36.083 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:36.083 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:36.342 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:36.342 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:36.342 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:36.342 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:36.342 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:36.342 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:36.342 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:36.342 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:36.342 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:36.342 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:36.342 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.385 ms 00:12:36.342 00:12:36.342 --- 10.0.0.2 ping statistics --- 00:12:36.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.342 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:12:36.342 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:36.342 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:36.342 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:12:36.342 00:12:36.342 --- 10.0.0.1 ping statistics --- 00:12:36.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.342 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:12:36.342 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:36.342 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:12:36.342 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:36.342 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:36.342 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:36.342 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:36.342 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:36.342 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:36.342 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:36.601 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:36.601 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:36.601 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:36.601 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:36.602 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=1466309 00:12:36.602 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:36.602 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 1466309 00:12:36.602 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 1466309 ']' 00:12:36.602 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.602 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:36.602 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:12:36.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.602 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:36.602 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:36.602 [2024-12-12 10:26:10.428769] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:12:36.602 [2024-12-12 10:26:10.428818] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:36.602 [2024-12-12 10:26:10.507291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.602 [2024-12-12 10:26:10.547156] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:36.602 [2024-12-12 10:26:10.547190] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:36.602 [2024-12-12 10:26:10.547198] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:36.602 [2024-12-12 10:26:10.547205] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:36.602 [2024-12-12 10:26:10.547211] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:36.602 [2024-12-12 10:26:10.547722] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:36.861 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:36.861 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:12:36.861 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:36.861 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:36.861 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:36.861 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:36.861 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:36.861 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.861 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:36.861 [2024-12-12 10:26:10.692253] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:36.861 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.861 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:36.861 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.861 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:36.861 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:36.861 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:36.861 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.861 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:36.861 [2024-12-12 10:26:10.712431] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.861 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.861 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:36.861 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.861 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:36.861 NULL1 00:12:36.861 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.861 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:36.861 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.861 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:36.861 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.861 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:36.861 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.861 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:36.861 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.862 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:36.862 [2024-12-12 10:26:10.772237] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
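For reference, the rpc_cmd sequence traced above (fused_ordering.sh@15-20) corresponds to plain scripts/rpc.py calls against the target's default /var/tmp/spdk.sock socket. A sketch of the same bring-up, flags exactly as traced, followed by the fused_ordering client launch:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512    # 1000 MiB null bdev -> the "size: 1GB" namespace below
$rpc bdev_wait_for_examine
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1   # surfaces as Namespace ID: 1
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'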
00:12:36.862 [2024-12-12 10:26:10.772280] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1466454 ] 00:12:37.121 Attached to nqn.2016-06.io.spdk:cnode1 00:12:37.121 Namespace ID: 1 size: 1GB 00:12:37.121 fused_ordering(0) 00:12:37.121 fused_ordering(1) 00:12:37.121 fused_ordering(2) 00:12:37.121 [fused_ordering(3) through fused_ordering(848) elided: one identical counter line per fused command pair issued to the namespace, timestamps advancing from 00:12:37.121 to 00:12:38.858] 00:12:38.858 fused_ordering(849) 00:12:38.858 fused_ordering(850) 00:12:38.858
fused_ordering(851) 00:12:38.858 fused_ordering(852) 00:12:38.858 fused_ordering(853) 00:12:38.858 fused_ordering(854) 00:12:38.858 fused_ordering(855) 00:12:38.858 fused_ordering(856) 00:12:38.858 fused_ordering(857) 00:12:38.858 fused_ordering(858) 00:12:38.858 fused_ordering(859) 00:12:38.858 fused_ordering(860) 00:12:38.858 fused_ordering(861) 00:12:38.858 fused_ordering(862) 00:12:38.858 fused_ordering(863) 00:12:38.858 fused_ordering(864) 00:12:38.858 fused_ordering(865) 00:12:38.858 fused_ordering(866) 00:12:38.858 fused_ordering(867) 00:12:38.858 fused_ordering(868) 00:12:38.858 fused_ordering(869) 00:12:38.858 fused_ordering(870) 00:12:38.858 fused_ordering(871) 00:12:38.858 fused_ordering(872) 00:12:38.858 fused_ordering(873) 00:12:38.858 fused_ordering(874) 00:12:38.858 fused_ordering(875) 00:12:38.858 fused_ordering(876) 00:12:38.858 fused_ordering(877) 00:12:38.858 fused_ordering(878) 00:12:38.858 fused_ordering(879) 00:12:38.858 fused_ordering(880) 00:12:38.858 fused_ordering(881) 00:12:38.858 fused_ordering(882) 00:12:38.858 fused_ordering(883) 00:12:38.858 fused_ordering(884) 00:12:38.858 fused_ordering(885) 00:12:38.858 fused_ordering(886) 00:12:38.858 fused_ordering(887) 00:12:38.858 fused_ordering(888) 00:12:38.858 fused_ordering(889) 00:12:38.858 fused_ordering(890) 00:12:38.858 fused_ordering(891) 00:12:38.858 fused_ordering(892) 00:12:38.858 fused_ordering(893) 00:12:38.858 fused_ordering(894) 00:12:38.858 fused_ordering(895) 00:12:38.858 fused_ordering(896) 00:12:38.858 fused_ordering(897) 00:12:38.858 fused_ordering(898) 00:12:38.858 fused_ordering(899) 00:12:38.858 fused_ordering(900) 00:12:38.858 fused_ordering(901) 00:12:38.858 fused_ordering(902) 00:12:38.858 fused_ordering(903) 00:12:38.858 fused_ordering(904) 00:12:38.858 fused_ordering(905) 00:12:38.858 fused_ordering(906) 00:12:38.858 fused_ordering(907) 00:12:38.858 fused_ordering(908) 00:12:38.858 fused_ordering(909) 00:12:38.858 fused_ordering(910) 00:12:38.858 fused_ordering(911) 00:12:38.858 fused_ordering(912) 00:12:38.858 fused_ordering(913) 00:12:38.858 fused_ordering(914) 00:12:38.858 fused_ordering(915) 00:12:38.858 fused_ordering(916) 00:12:38.858 fused_ordering(917) 00:12:38.858 fused_ordering(918) 00:12:38.858 fused_ordering(919) 00:12:38.858 fused_ordering(920) 00:12:38.858 fused_ordering(921) 00:12:38.858 fused_ordering(922) 00:12:38.858 fused_ordering(923) 00:12:38.858 fused_ordering(924) 00:12:38.858 fused_ordering(925) 00:12:38.858 fused_ordering(926) 00:12:38.858 fused_ordering(927) 00:12:38.858 fused_ordering(928) 00:12:38.858 fused_ordering(929) 00:12:38.858 fused_ordering(930) 00:12:38.858 fused_ordering(931) 00:12:38.858 fused_ordering(932) 00:12:38.858 fused_ordering(933) 00:12:38.858 fused_ordering(934) 00:12:38.858 fused_ordering(935) 00:12:38.858 fused_ordering(936) 00:12:38.858 fused_ordering(937) 00:12:38.858 fused_ordering(938) 00:12:38.858 fused_ordering(939) 00:12:38.858 fused_ordering(940) 00:12:38.858 fused_ordering(941) 00:12:38.858 fused_ordering(942) 00:12:38.858 fused_ordering(943) 00:12:38.858 fused_ordering(944) 00:12:38.858 fused_ordering(945) 00:12:38.858 fused_ordering(946) 00:12:38.858 fused_ordering(947) 00:12:38.858 fused_ordering(948) 00:12:38.858 fused_ordering(949) 00:12:38.858 fused_ordering(950) 00:12:38.858 fused_ordering(951) 00:12:38.858 fused_ordering(952) 00:12:38.858 fused_ordering(953) 00:12:38.858 fused_ordering(954) 00:12:38.858 fused_ordering(955) 00:12:38.858 fused_ordering(956) 00:12:38.858 fused_ordering(957) 00:12:38.858 fused_ordering(958) 
00:12:38.858 fused_ordering(959) 00:12:38.858 fused_ordering(960) 00:12:38.858 fused_ordering(961) 00:12:38.858 fused_ordering(962) 00:12:38.858 fused_ordering(963) 00:12:38.858 fused_ordering(964) 00:12:38.858 fused_ordering(965) 00:12:38.858 fused_ordering(966) 00:12:38.858 fused_ordering(967) 00:12:38.858 fused_ordering(968) 00:12:38.858 fused_ordering(969) 00:12:38.858 fused_ordering(970) 00:12:38.858 fused_ordering(971) 00:12:38.858 fused_ordering(972) 00:12:38.858 fused_ordering(973) 00:12:38.858 fused_ordering(974) 00:12:38.858 fused_ordering(975) 00:12:38.858 fused_ordering(976) 00:12:38.858 fused_ordering(977) 00:12:38.858 fused_ordering(978) 00:12:38.858 fused_ordering(979) 00:12:38.858 fused_ordering(980) 00:12:38.858 fused_ordering(981) 00:12:38.858 fused_ordering(982) 00:12:38.858 fused_ordering(983) 00:12:38.858 fused_ordering(984) 00:12:38.858 fused_ordering(985) 00:12:38.858 fused_ordering(986) 00:12:38.858 fused_ordering(987) 00:12:38.858 fused_ordering(988) 00:12:38.858 fused_ordering(989) 00:12:38.858 fused_ordering(990) 00:12:38.858 fused_ordering(991) 00:12:38.858 fused_ordering(992) 00:12:38.858 fused_ordering(993) 00:12:38.858 fused_ordering(994) 00:12:38.858 fused_ordering(995) 00:12:38.858 fused_ordering(996) 00:12:38.858 fused_ordering(997) 00:12:38.858 fused_ordering(998) 00:12:38.858 fused_ordering(999) 00:12:38.858 fused_ordering(1000) 00:12:38.858 fused_ordering(1001) 00:12:38.858 fused_ordering(1002) 00:12:38.858 fused_ordering(1003) 00:12:38.858 fused_ordering(1004) 00:12:38.858 fused_ordering(1005) 00:12:38.858 fused_ordering(1006) 00:12:38.858 fused_ordering(1007) 00:12:38.858 fused_ordering(1008) 00:12:38.858 fused_ordering(1009) 00:12:38.858 fused_ordering(1010) 00:12:38.858 fused_ordering(1011) 00:12:38.858 fused_ordering(1012) 00:12:38.858 fused_ordering(1013) 00:12:38.858 fused_ordering(1014) 00:12:38.858 fused_ordering(1015) 00:12:38.858 fused_ordering(1016) 00:12:38.858 fused_ordering(1017) 00:12:38.859 fused_ordering(1018) 00:12:38.859 fused_ordering(1019) 00:12:38.859 fused_ordering(1020) 00:12:38.859 fused_ordering(1021) 00:12:38.859 fused_ordering(1022) 00:12:38.859 fused_ordering(1023) 00:12:38.859 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:38.859 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:38.859 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:38.859 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:12:38.859 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:38.859 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:12:38.859 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:38.859 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:38.859 rmmod nvme_tcp 00:12:38.859 rmmod nvme_fabrics 00:12:38.859 rmmod nvme_keyring 00:12:38.859 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:38.859 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:12:38.859 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:12:38.859 10:26:12 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 1466309 ']' 00:12:38.859 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 1466309 00:12:38.859 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 1466309 ']' 00:12:38.859 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 1466309 00:12:38.859 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:12:38.859 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:38.859 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1466309 00:12:38.859 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:38.859 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:38.859 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1466309' 00:12:38.859 killing process with pid 1466309 00:12:38.859 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 1466309 00:12:38.859 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 1466309 00:12:38.859 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:38.859 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:38.859 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:38.859 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:12:38.859 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:12:38.859 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:38.859 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:12:38.859 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:38.859 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:38.859 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:38.859 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:38.859 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.414 10:26:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:41.414 00:12:41.414 real 0m10.552s 00:12:41.414 user 0m4.878s 00:12:41.414 sys 0m5.775s 00:12:41.414 10:26:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:41.414 10:26:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:41.414 ************************************ 00:12:41.414 END TEST nvmf_fused_ordering 00:12:41.414 
************************************ 00:12:41.414 10:26:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:41.414 10:26:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:41.414 10:26:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:41.414 10:26:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:41.415 ************************************ 00:12:41.415 START TEST nvmf_ns_masking 00:12:41.415 ************************************ 00:12:41.415 10:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:41.415 * Looking for test storage... 00:12:41.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:41.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.415 --rc genhtml_branch_coverage=1 00:12:41.415 --rc genhtml_function_coverage=1 00:12:41.415 --rc genhtml_legend=1 00:12:41.415 --rc geninfo_all_blocks=1 00:12:41.415 --rc geninfo_unexecuted_blocks=1 00:12:41.415 00:12:41.415 ' 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:41.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.415 --rc genhtml_branch_coverage=1 00:12:41.415 --rc genhtml_function_coverage=1 00:12:41.415 --rc genhtml_legend=1 00:12:41.415 --rc geninfo_all_blocks=1 00:12:41.415 --rc geninfo_unexecuted_blocks=1 00:12:41.415 00:12:41.415 ' 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:41.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.415 --rc genhtml_branch_coverage=1 00:12:41.415 --rc genhtml_function_coverage=1 00:12:41.415 --rc genhtml_legend=1 00:12:41.415 --rc geninfo_all_blocks=1 00:12:41.415 --rc geninfo_unexecuted_blocks=1 00:12:41.415 00:12:41.415 ' 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:41.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.415 --rc genhtml_branch_coverage=1 00:12:41.415 --rc genhtml_function_coverage=1 00:12:41.415 --rc genhtml_legend=1 00:12:41.415 --rc geninfo_all_blocks=1 00:12:41.415 --rc geninfo_unexecuted_blocks=1 00:12:41.415 00:12:41.415 ' 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:41.415 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:41.415 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:41.416 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:41.416 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:41.416 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=b6272c54-ae92-4790-8321-43235d9be90f 00:12:41.416 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:41.416 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=8b4694f7-e280-4dc9-99ed-39ca6a289667 00:12:41.416 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:41.416 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:41.416 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:41.416 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:41.416 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=8034076d-e939-4d07-86e5-650cb07ca7f2 00:12:41.416 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:41.416 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:41.416 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:41.416 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:41.416 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:41.416 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:41.416 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.416 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:41.416 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.416 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:41.416 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:41.416 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:12:41.416 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:47.984 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:47.984 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:12:47.984 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:47.984 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:47.984 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:47.984 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:47.984 10:26:20 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:47.984 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:12:47.984 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:47.984 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:12:47.984 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:12:47.984 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:12:47.984 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:12:47.984 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:12:47.984 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:12:47.984 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:47.984 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:47.984 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:47.984 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:47.984 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:47.984 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:47.984 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:47.984 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:47.984 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:47.984 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:47.984 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:47.984 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:47.984 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:47.984 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:47.985 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:47.985 10:26:20 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:47.985 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:47.985 Found net devices under 0000:af:00.0: cvl_0_0 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
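For orientation: the device-discovery records just traced for 0000:af:00.0 (and repeated for 0000:af:00.1 below) map a PCI function to its kernel net interface by globbing sysfs. A minimal standalone sketch of that lookup, assuming bash and the sysfs layout shown in the trace — the helper name is illustrative, not part of nvmf/common.sh:

    # Sketch of the lookup traced at nvmf/common.sh@411/@427/@428.
    pci_to_netdev() {
        local pci=$1                                             # e.g. 0000:af:00.0
        local pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdevs bound to this PCI function
        [[ -e ${pci_net_devs[0]} ]] || return 1                  # no driver/netdev bound
        pci_net_devs=("${pci_net_devs[@]##*/}")                  # strip the path, keep interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    }
    pci_to_netdev 0000:af:00.0                                   # -> cvl_0_0 on this machine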
00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:47.985 Found net devices under 0000:af:00.1: cvl_0_1 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:47.985 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:47.985 10:26:21 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:47.985 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:47.985 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:47.985 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:47.985 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:47.985 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.416 ms 00:12:47.985 00:12:47.985 --- 10.0.0.2 ping statistics --- 00:12:47.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.985 rtt min/avg/max/mdev = 0.416/0.416/0.416/0.000 ms 00:12:47.985 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:47.985 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:47.985 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:12:47.985 00:12:47.985 --- 10.0.0.1 ping statistics --- 00:12:47.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.985 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:12:47.985 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:47.985 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:12:47.985 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:47.985 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:47.985 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:47.985 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:47.985 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:47.985 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:47.985 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:47.985 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:47.985 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:47.985 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:47.985 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:47.985 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=1470238 00:12:47.985 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 1470238 00:12:47.986 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:47.986 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1470238 ']' 00:12:47.986 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.986 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:47.986 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.986 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:47.986 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:47.986 [2024-12-12 10:26:21.132909] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:12:47.986 [2024-12-12 10:26:21.132951] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:47.986 [2024-12-12 10:26:21.208006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.986 [2024-12-12 10:26:21.248960] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:47.986 [2024-12-12 10:26:21.248997] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:47.986 [2024-12-12 10:26:21.249007] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:47.986 [2024-12-12 10:26:21.249016] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:47.986 [2024-12-12 10:26:21.249022] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
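With the target now listening on its RPC socket inside the cvl_0_0_ns_spdk namespace, the records below build the ns-masking testbed and attach an initiator. Consolidated here as a sketch for readability: every command and argument is taken from this trace; only the comments and the $rpc shorthand are added.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport, options as passed in this run
    $rpc bdev_malloc_create 64 512 -b Malloc1           # 64 MiB RAM-backed bdev, 512-byte blocks
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1   # NSID 1; visible by default
                                                        # (contrast --no-auto-visible later in the test)
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I 8034076d-e939-4d07-86e5-650cb07ca7f2 -a 10.0.0.2 -s 4420 -i 4  # HOSTID from ns_masking.sh@19
    nvme list-ns /dev/nvme0                             # NSID 0x1 should be listed
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid # nonzero NGUID == namespace visible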
00:12:47.986 [2024-12-12 10:26:21.249530] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.986 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:47.986 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:12:47.986 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:47.986 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:47.986 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:47.986 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:47.986 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:47.986 [2024-12-12 10:26:21.542790] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:47.986 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:47.986 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:47.986 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:47.986 Malloc1 00:12:47.986 10:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:47.986 Malloc2 00:12:48.244 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:48.244 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:48.502 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:48.760 [2024-12-12 10:26:22.580589] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:48.760 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:48.760 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 8034076d-e939-4d07-86e5-650cb07ca7f2 -a 10.0.0.2 -s 4420 -i 4 00:12:49.019 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:49.019 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:49.019 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:49.019 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:49.019 
10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:50.919 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:50.919 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:50.919 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:50.919 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:50.919 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:50.919 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:50.919 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:50.919 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:50.919 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:50.919 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:50.919 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:50.919 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:50.919 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:51.179 [ 0]:0x1 00:12:51.179 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:51.179 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:51.179 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d2fdd6cf6f984ce6b82cfe7c3c12aad1 00:12:51.179 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d2fdd6cf6f984ce6b82cfe7c3c12aad1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:51.179 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:51.179 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:51.179 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:51.179 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:51.441 [ 0]:0x1 00:12:51.441 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:51.441 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:51.441 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d2fdd6cf6f984ce6b82cfe7c3c12aad1 00:12:51.441 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d2fdd6cf6f984ce6b82cfe7c3c12aad1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:51.441 10:26:25 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:51.441 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:51.441 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:51.441 [ 1]:0x2 00:12:51.441 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:51.441 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:51.441 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bb9dd633d162462a901e0206e0b7bf17 00:12:51.441 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bb9dd633d162462a901e0206e0b7bf17 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:51.441 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:51.441 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:51.441 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.441 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.700 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:51.958 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:51.958 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 8034076d-e939-4d07-86e5-650cb07ca7f2 -a 10.0.0.2 -s 4420 -i 4 00:12:51.958 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:51.958 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:51.958 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:51.958 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:12:51.958 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:12:51.958 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:54.488 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:54.488 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:54.488 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:54.488 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:54.488 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:54.488 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
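Recapped from the commands just traced, this is where masking proper begins: namespace 1 is dropped and re-added with auto-visibility disabled, and the host reconnects expecting to see it masked:

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    # after reconnecting, only NSID 2 should be visible until a host NQN is whitelisted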
return 0 00:12:54.488 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:54.488 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:54.488 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:54.488 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:54.488 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:54.488 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:54.488 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:54.488 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:54.488 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:54.488 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:54.488 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:54.488 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:54.488 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:54.488 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:54.488 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:54.488 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:54.488 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:54.488 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:54.488 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:54.488 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:54.488 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:54.488 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:54.488 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:54.488 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:54.488 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:54.488 [ 0]:0x2 00:12:54.488 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:54.488 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:54.488 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
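ns_is_visible is expected to fail on the masked namespace, so the script wraps it in the NOT helper from autotest_common.sh. The trace shows the full helper (argument validation via type -t, exit-status bookkeeping in es); its observable behavior reduces to this sketch:

    NOT() {                          # reduced sketch; the real helper also passes through es > 128
        if "$@"; then return 1; else return 0; fi
    }
    NOT ns_is_visible 0x1            # passes only because NSID 1 now reports an all-zero NGUID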
nguid=bb9dd633d162462a901e0206e0b7bf17 00:12:54.488 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bb9dd633d162462a901e0206e0b7bf17 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:54.488 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:54.488 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:54.488 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:54.488 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:54.488 [ 0]:0x1 00:12:54.488 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:54.488 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:54.488 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d2fdd6cf6f984ce6b82cfe7c3c12aad1 00:12:54.488 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d2fdd6cf6f984ce6b82cfe7c3c12aad1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:54.488 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:54.488 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:54.488 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:54.488 [ 1]:0x2 00:12:54.488 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:54.488 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:54.488 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bb9dd633d162462a901e0206e0b7bf17 00:12:54.488 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bb9dd633d162462a901e0206e0b7bf17 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:54.488 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:54.746 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:54.746 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:54.746 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:54.746 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:54.746 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:54.746 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:54.746 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:54.746 10:26:28 
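nvmf_ns_add_host and nvmf_ns_remove_host are the per-host half of masking: they grant or revoke a single host NQN's view of a --no-auto-visible namespace without touching the namespace itself. From the trace:

    rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1      # NSID 1 visible to host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # and masked again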
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:54.746 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:54.746 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:54.746 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:54.746 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:54.746 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:54.746 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:54.746 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:54.746 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:54.746 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:54.746 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:54.746 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:54.746 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:54.746 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:54.746 [ 0]:0x2 00:12:54.746 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:54.746 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:54.746 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bb9dd633d162462a901e0206e0b7bf17 00:12:54.746 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bb9dd633d162462a901e0206e0b7bf17 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:54.746 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:54.746 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:55.004 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.004 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:55.004 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:55.004 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 8034076d-e939-4d07-86e5-650cb07ca7f2 -a 10.0.0.2 -s 4420 -i 4 00:12:55.263 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:55.263 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:55.263 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:55.263 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:12:55.263 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:12:55.263 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:57.167 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:57.167 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:57.167 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:57.167 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:12:57.167 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:57.167 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:57.167 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:57.167 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:57.428 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:57.428 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:57.428 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:57.428 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:57.428 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:57.428 [ 0]:0x1 00:12:57.428 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:57.428 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:57.428 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d2fdd6cf6f984ce6b82cfe7c3c12aad1 00:12:57.428 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d2fdd6cf6f984ce6b82cfe7c3c12aad1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:57.428 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:57.428 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:57.428 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:57.428 [ 1]:0x2 00:12:57.428 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:57.428 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:57.428 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bb9dd633d162462a901e0206e0b7bf17 00:12:57.428 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bb9dd633d162462a901e0206e0b7bf17 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:57.428 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:57.687 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:57.687 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:57.687 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:57.687 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:57.687 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:57.687 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:57.687 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:57.687 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:57.687 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:57.687 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:57.687 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:57.687 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:57.687 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:57.687 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:57.687 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:57.687 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:57.687 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:57.687 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:57.687 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:57.687 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:57.687 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:57.687 [ 0]:0x2 00:12:57.687 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:57.687 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:57.946 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bb9dd633d162462a901e0206e0b7bf17 00:12:57.946 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bb9dd633d162462a901e0206e0b7bf17 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:57.946 10:26:31 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:57.946 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:57.946 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:57.946 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:57.946 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:57.946 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:57.946 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:57.946 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:57.946 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:57.946 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:57.946 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:57.946 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:57.946 [2024-12-12 10:26:31.915076] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:57.946 request: 00:12:57.946 { 00:12:57.946 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:57.946 "nsid": 2, 00:12:57.946 "host": "nqn.2016-06.io.spdk:host1", 00:12:57.946 "method": "nvmf_ns_remove_host", 00:12:57.946 "req_id": 1 00:12:57.946 } 00:12:57.946 Got JSON-RPC error response 00:12:57.946 response: 00:12:57.946 { 00:12:57.946 "code": -32602, 00:12:57.946 "message": "Invalid parameters" 00:12:57.946 } 00:12:57.946 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:57.946 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:57.946 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:57.946 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:57.946 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:57.946 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:57.946 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:57.946 10:26:31 
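The request and error printed above come straight from the JSON-RPC layer: namespace 2 was added without --no-auto-visible, so it has no host list to edit and the call fails with -32602. The same request can be issued by hand over the target's RPC socket; a sketch, assuming the default /var/tmp/spdk.sock (this run's target was started without -r):

    echo '{"jsonrpc":"2.0","id":1,"method":"nvmf_ns_remove_host","params":{"nqn":"nqn.2016-06.io.spdk:cnode1","nsid":2,"host":"nqn.2016-06.io.spdk:host1"}}' \
        | nc -U /var/tmp/spdk.sock       # expect code -32602, "Invalid parameters"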
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:57.946 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:57.946 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:57.946 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:57.946 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:57.946 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:57.946 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:57.946 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:57.946 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:58.205 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:58.205 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:58.205 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:58.205 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:58.205 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:58.205 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:58.205 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:58.205 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:58.205 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:58.205 [ 0]:0x2 00:12:58.205 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:58.205 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:58.205 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bb9dd633d162462a901e0206e0b7bf17 00:12:58.205 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bb9dd633d162462a901e0206e0b7bf17 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:58.205 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:58.205 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:58.205 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.205 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1472191 00:12:58.205 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:58.205 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:58.205 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1472191 /var/tmp/host.sock 00:12:58.205 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1472191 ']' 00:12:58.205 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:12:58.205 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:58.205 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:58.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:58.205 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:58.205 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:58.205 [2024-12-12 10:26:32.133923] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:12:58.205 [2024-12-12 10:26:32.133971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1472191 ] 00:12:58.205 [2024-12-12 10:26:32.207037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.464 [2024-12-12 10:26:32.248668] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:58.464 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:58.464 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:12:58.464 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.722 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:58.980 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid b6272c54-ae92-4790-8321-43235d9be90f 00:12:58.980 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:58.980 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g B6272C54AE924790832143235D9BE90F -i 00:12:59.239 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 8b4694f7-e280-4dc9-99ed-39ca6a289667 00:12:59.239 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:59.239 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 8B4694F7E2804DC999ED39CA6A289667 -i 00:12:59.239 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
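uuid2nguid converts a UUID to the 32-hex-digit NGUID form. The trace only captures its tr -d - half; the upper-casing happens inside the helper in nvmf/common.sh and is not shown here. The namespaces are then re-created with explicit NGUIDs, as above:

    echo b6272c54-ae92-4790-8321-43235d9be90f | tr -d -    # dashes stripped; helper also upper-cases
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 \
        -g B6272C54AE924790832143235D9BE90F -i
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 \
        -g 8B4694F7E2804DC999ED39CA6A289667 -i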
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:59.497 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:59.755 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:59.755 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:00.013 nvme0n1 00:13:00.014 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:00.014 10:26:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:00.272 nvme1n2 00:13:00.272 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:00.272 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:00.272 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:00.272 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:00.272 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:00.530 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:00.530 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:00.530 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:00.530 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:00.789 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ b6272c54-ae92-4790-8321-43235d9be90f == \b\6\2\7\2\c\5\4\-\a\e\9\2\-\4\7\9\0\-\8\3\2\1\-\4\3\2\3\5\d\9\b\e\9\0\f ]] 00:13:00.789 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:00.789 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:00.789 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:01.047 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
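The second spdk_tgt instance (started with -r /var/tmp/host.sock) acts as the NVMe-oF host; the hostrpc helper simply points rpc.py at that socket. The attach-and-inspect pair from the trace:

    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 \
        -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
    rpc.py -s /var/tmp/host.sock bdev_get_bdevs | jq -r '.[].name'   # nvme0n1 for host1, nvme1n2 for host2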
8b4694f7-e280-4dc9-99ed-39ca6a289667 == \8\b\4\6\9\4\f\7\-\e\2\8\0\-\4\d\c\9\-\9\9\e\d\-\3\9\c\a\6\a\2\8\9\6\6\7 ]] 00:13:01.047 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:01.047 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:01.308 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid b6272c54-ae92-4790-8321-43235d9be90f 00:13:01.308 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:01.308 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g B6272C54AE924790832143235D9BE90F 00:13:01.308 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:01.308 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g B6272C54AE924790832143235D9BE90F 00:13:01.308 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:01.308 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:01.308 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:01.308 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:01.308 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:01.308 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:01.308 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:01.308 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:01.308 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g B6272C54AE924790832143235D9BE90F 00:13:01.567 [2024-12-12 10:26:35.412720] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:13:01.567 [2024-12-12 10:26:35.412752] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:13:01.567 [2024-12-12 10:26:35.412760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.567 request: 00:13:01.567 { 00:13:01.567 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:01.567 "namespace": { 00:13:01.567 "bdev_name": 
"invalid", 00:13:01.567 "nsid": 1, 00:13:01.567 "nguid": "B6272C54AE924790832143235D9BE90F", 00:13:01.567 "no_auto_visible": false, 00:13:01.567 "hide_metadata": false 00:13:01.567 }, 00:13:01.567 "method": "nvmf_subsystem_add_ns", 00:13:01.567 "req_id": 1 00:13:01.567 } 00:13:01.567 Got JSON-RPC error response 00:13:01.567 response: 00:13:01.567 { 00:13:01.567 "code": -32602, 00:13:01.567 "message": "Invalid parameters" 00:13:01.567 } 00:13:01.567 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:01.567 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:01.567 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:01.567 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:01.567 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid b6272c54-ae92-4790-8321-43235d9be90f 00:13:01.567 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:01.567 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g B6272C54AE924790832143235D9BE90F -i 00:13:01.825 10:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:13:03.727 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:13:03.727 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:13:03.727 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:03.986 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:13:03.986 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 1472191 00:13:03.986 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1472191 ']' 00:13:03.986 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1472191 00:13:03.986 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:03.986 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:03.986 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1472191 00:13:03.986 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:03.986 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:03.986 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1472191' 00:13:03.986 killing process with pid 1472191 00:13:03.986 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1472191 00:13:03.986 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1472191 00:13:04.245 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:04.503 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:13:04.503 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:13:04.503 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:04.503 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:13:04.503 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:04.503 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:13:04.503 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:04.503 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:04.503 rmmod nvme_tcp 00:13:04.503 rmmod nvme_fabrics 00:13:04.503 rmmod nvme_keyring 00:13:04.503 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:04.503 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:13:04.503 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:13:04.503 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 1470238 ']' 00:13:04.503 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 1470238 00:13:04.504 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1470238 ']' 00:13:04.504 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1470238 00:13:04.504 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:04.504 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:04.504 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1470238 00:13:04.504 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:04.504 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:04.504 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1470238' 00:13:04.504 killing process with pid 1470238 00:13:04.504 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1470238 00:13:04.504 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1470238 00:13:04.762 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:04.762 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:04.762 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:04.762 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:13:04.762 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:13:04.762 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
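Teardown mirrors the setup: stop the host-side app, delete the subsystem, then nvmftestfini unloads the initiator modules, kills the target and restores the firewall. In plain commands, with the PIDs from this run:

    kill 1472191                                     # host-side spdk_tgt (reactor_1)
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    sync
    modprobe -v -r nvme-tcp                          # retried up to 20 times by the helper
    modprobe -v -r nvme-fabrics
    kill 1470238                                     # target spdk_tgt (reactor_0)
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop test-only firewall rules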
00:13:04.762 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:13:04.762 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:04.762 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:04.762 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.762 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:04.762 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.297 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:07.297 00:13:07.297 real 0m25.852s 00:13:07.297 user 0m30.893s 00:13:07.297 sys 0m6.986s 00:13:07.297 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:07.297 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:07.297 ************************************ 00:13:07.297 END TEST nvmf_ns_masking 00:13:07.297 ************************************ 00:13:07.297 10:26:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:13:07.297 10:26:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:07.297 10:26:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:07.297 10:26:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:07.297 10:26:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:07.297 ************************************ 00:13:07.297 START TEST nvmf_nvme_cli 00:13:07.297 ************************************ 00:13:07.297 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:07.297 * Looking for test storage... 
00:13:07.297 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:07.297 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:07.297 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:13:07.297 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:07.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.297 --rc genhtml_branch_coverage=1 00:13:07.297 --rc genhtml_function_coverage=1 00:13:07.297 --rc genhtml_legend=1 00:13:07.297 --rc geninfo_all_blocks=1 00:13:07.297 --rc geninfo_unexecuted_blocks=1 00:13:07.297 00:13:07.297 ' 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:07.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.297 --rc genhtml_branch_coverage=1 00:13:07.297 --rc genhtml_function_coverage=1 00:13:07.297 --rc genhtml_legend=1 00:13:07.297 --rc geninfo_all_blocks=1 00:13:07.297 --rc geninfo_unexecuted_blocks=1 00:13:07.297 00:13:07.297 ' 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:07.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.297 --rc genhtml_branch_coverage=1 00:13:07.297 --rc genhtml_function_coverage=1 00:13:07.297 --rc genhtml_legend=1 00:13:07.297 --rc geninfo_all_blocks=1 00:13:07.297 --rc geninfo_unexecuted_blocks=1 00:13:07.297 00:13:07.297 ' 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:07.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.297 --rc genhtml_branch_coverage=1 00:13:07.297 --rc genhtml_function_coverage=1 00:13:07.297 --rc genhtml_legend=1 00:13:07.297 --rc geninfo_all_blocks=1 00:13:07.297 --rc geninfo_unexecuted_blocks=1 00:13:07.297 00:13:07.297 ' 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
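The new test gates its lcov option handling on the tool's version; the lt/cmp_versions walk traced above splits each version string on '.', '-' and ':' and compares the fields numerically. Reduced to the case seen here:

    IFS=.-: read -ra ver1 <<< 1.15
    IFS=.-: read -ra ver2 <<< 2
    (( ver1[0] < ver2[0] ))     # 1 < 2, so lt 1.15 2 returns 0 (true)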
00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.297 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.298 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:07.298 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.298 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:13:07.298 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:07.298 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:07.298 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:07.298 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:07.298 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:07.298 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:07.298 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:07.298 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:07.298 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:07.298 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:07.298 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:07.298 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:07.298 10:26:41 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:07.298 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:07.298 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:07.298 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:07.298 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:07.298 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:07.298 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:07.298 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.298 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:07.298 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.298 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:07.298 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:07.298 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:13:07.298 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:13.863 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:13.863 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:13:13.863 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:13.863 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:13.863 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:13.863 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:13.863 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:13.863 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:13:13.863 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:13.863 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:13:13.863 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:13:13.863 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:13:13.863 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:13:13.863 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:13:13.863 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:13:13.863 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:13.863 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:13.863 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:13.863 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:13.863 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:13.863 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:13.863 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:13.863 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:13.863 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:13.863 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:13.863 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:13.863 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:13.863 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:13.863 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:13.863 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:13.864 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:13.864 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:13.864 
10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:13.864 Found net devices under 0000:af:00.0: cvl_0_0 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:13.864 Found net devices under 0000:af:00.1: cvl_0_1 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:13.864 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:13.864 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:13:13.864 00:13:13.864 --- 10.0.0.2 ping statistics --- 00:13:13.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.864 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:13.864 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:13.864 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:13:13.864 00:13:13.864 --- 10.0.0.1 ping statistics --- 00:13:13.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.864 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:13.864 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:13.864 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:13.864 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:13.864 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:13.864 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:13.864 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=1476814 00:13:13.864 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:13.864 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 1476814 00:13:13.864 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 1476814 ']' 00:13:13.864 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.864 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:13.864 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.864 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:13.864 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:13.864 [2024-12-12 10:26:47.065961] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
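
The block above is nvmf_tcp_init: it splits the two e810 ports across network namespaces so the target (10.0.0.2 on cvl_0_0, inside cvl_0_0_ns_spdk) and the initiator (10.0.0.1 on cvl_0_1, root namespace) talk over real hardware on a single host, then proves connectivity with the two pings. The same setup, condensed from the commands in the trace:

    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1    # start from clean interfaces
    ip netns add cvl_0_0_ns_spdk                          # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port on the initiator-facing interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator
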
00:13:13.864 [2024-12-12 10:26:47.066009] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:13.864 [2024-12-12 10:26:47.144801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:13.864 [2024-12-12 10:26:47.187241] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:13.864 [2024-12-12 10:26:47.187280] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:13.864 [2024-12-12 10:26:47.187288] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:13.864 [2024-12-12 10:26:47.187294] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:13.864 [2024-12-12 10:26:47.187299] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:13.864 [2024-12-12 10:26:47.188777] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:13:13.864 [2024-12-12 10:26:47.188888] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:13:13.865 [2024-12-12 10:26:47.188969] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.865 [2024-12-12 10:26:47.188970] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:13.865 [2024-12-12 10:26:47.330552] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:13.865 Malloc0 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
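
The rpc_cmd and nvme traces that follow build the NVMe-oF target and then exercise it with nvme-cli from the initiator side. Collapsed into the equivalent commands (rpc_cmd is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock; the host NQN/ID values are the ones generated earlier in this log):

    # target side
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB, 512 B blocks
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # initiator side
    nvme discover -t tcp -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
        --hostid=80b56b8f-cbc7-e911-906e-0017a4403562
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
        --hostid=80b56b8f-cbc7-e911-906e-0017a4403562
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME     # expect 2 namespaces
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
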
00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:13.865 Malloc1 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:13.865 [2024-12-12 10:26:47.420598] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:13:13.865 00:13:13.865 Discovery Log Number of Records 2, Generation counter 2 00:13:13.865 =====Discovery Log Entry 0====== 00:13:13.865 trtype: tcp 00:13:13.865 adrfam: ipv4 00:13:13.865 subtype: current discovery subsystem 00:13:13.865 treq: not required 00:13:13.865 portid: 0 00:13:13.865 trsvcid: 4420 00:13:13.865 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:13:13.865 traddr: 10.0.0.2 00:13:13.865 eflags: explicit discovery connections, duplicate discovery information 00:13:13.865 sectype: none 00:13:13.865 =====Discovery Log Entry 1====== 00:13:13.865 trtype: tcp 00:13:13.865 adrfam: ipv4 00:13:13.865 subtype: nvme subsystem 00:13:13.865 treq: not required 00:13:13.865 portid: 0 00:13:13.865 trsvcid: 4420 00:13:13.865 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:13.865 traddr: 10.0.0.2 00:13:13.865 eflags: none 00:13:13.865 sectype: none 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:13.865 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:14.800 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:14.800 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:13:14.800 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:14.801 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:14.801 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:14.801 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:13:17.331 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:17.331 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:17.331 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:17.332 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:17.332 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:17.332 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:13:17.332 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:17.332 10:26:50 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:17.332 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:17.332 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:17.332 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:17.332 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:17.332 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:17.332 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:17.332 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:17.332 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:17.332 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:17.332 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:17.332 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:17.332 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:17.332 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:13:17.332 /dev/nvme0n2 ]] 00:13:17.332 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:17.332 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:17.332 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:17.332 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:17.332 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:17.332 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:17.332 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:17.332 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:17.332 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:17.332 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:17.332 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:17.332 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:17.332 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:17.332 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:17.332 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:17.332 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:17.332 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:17.591 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.591 10:26:51 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:17.591 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:13:17.591 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:17.591 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:17.591 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:17.591 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:17.591 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:13:17.591 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:17.591 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:17.591 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.591 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:17.591 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.591 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:17.591 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:17.591 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:17.591 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:13:17.591 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:17.591 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:13:17.591 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:17.591 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:17.591 rmmod nvme_tcp 00:13:17.591 rmmod nvme_fabrics 00:13:17.591 rmmod nvme_keyring 00:13:17.591 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:17.591 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:13:17.591 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:13:17.591 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 1476814 ']' 00:13:17.591 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 1476814 00:13:17.591 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 1476814 ']' 00:13:17.591 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 1476814 00:13:17.591 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:13:17.591 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:17.591 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
1476814 00:13:17.591 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:17.591 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:17.591 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1476814' 00:13:17.591 killing process with pid 1476814 00:13:17.591 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 1476814 00:13:17.591 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 1476814 00:13:17.850 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:17.850 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:17.850 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:17.850 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:13:17.850 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:13:17.850 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:17.850 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:13:17.850 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:17.850 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:17.850 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.850 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:17.850 10:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.386 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:20.386 00:13:20.386 real 0m13.011s 00:13:20.386 user 0m20.083s 00:13:20.386 sys 0m5.063s 00:13:20.386 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:20.386 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:20.386 ************************************ 00:13:20.386 END TEST nvmf_nvme_cli 00:13:20.386 ************************************ 00:13:20.386 10:26:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:13:20.386 10:26:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:20.386 10:26:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:20.386 10:26:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:20.386 10:26:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:20.386 ************************************ 00:13:20.386 START TEST nvmf_vfio_user 00:13:20.386 ************************************ 00:13:20.386 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:13:20.386 * Looking for test storage... 00:13:20.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:20.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.386 --rc genhtml_branch_coverage=1 00:13:20.386 --rc genhtml_function_coverage=1 00:13:20.386 --rc genhtml_legend=1 00:13:20.386 --rc geninfo_all_blocks=1 00:13:20.386 --rc geninfo_unexecuted_blocks=1 00:13:20.386 00:13:20.386 ' 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:20.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.386 --rc genhtml_branch_coverage=1 00:13:20.386 --rc genhtml_function_coverage=1 00:13:20.386 --rc genhtml_legend=1 00:13:20.386 --rc geninfo_all_blocks=1 00:13:20.386 --rc geninfo_unexecuted_blocks=1 00:13:20.386 00:13:20.386 ' 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:20.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.386 --rc genhtml_branch_coverage=1 00:13:20.386 --rc genhtml_function_coverage=1 00:13:20.386 --rc genhtml_legend=1 00:13:20.386 --rc geninfo_all_blocks=1 00:13:20.386 --rc geninfo_unexecuted_blocks=1 00:13:20.386 00:13:20.386 ' 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:20.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.386 --rc genhtml_branch_coverage=1 00:13:20.386 --rc genhtml_function_coverage=1 00:13:20.386 --rc genhtml_legend=1 00:13:20.386 --rc geninfo_all_blocks=1 00:13:20.386 --rc geninfo_unexecuted_blocks=1 00:13:20.386 00:13:20.386 ' 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.386 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:13:20.387 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.387 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:13:20.387 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:20.387 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:20.387 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:20.387 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:20.387 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:20.387 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:20.387 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:20.387 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:20.387 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:20.387 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:20.387 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:20.387 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
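
The '[: : integer expression expected' warning just above (and earlier, in the nvme_cli run) is a harmless bug the trace makes visible: nvmf/common.sh line 33 executes '[' '' -eq 1 ']', feeding an empty flag variable to an integer comparison. test(1) cannot coerce '' to a number, so it errors, the condition counts as false, and the script continues. Defensive variants that would avoid the noise (illustrative only; flag and use_feature are placeholders, not names from nvmf/common.sh):

    [ "$flag" -eq 1 ] && use_feature        # errors when flag='' (as in the trace)
    [ "${flag:-0}" -eq 1 ] && use_feature   # default empty to 0: no error
    [[ $flag == 1 ]] && use_feature         # string comparison never errors
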
00:13:20.387 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:20.387 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:20.387 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:20.387 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:20.387 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:20.387 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:20.387 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:20.387 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:20.387 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1478084 00:13:20.387 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1478084' 00:13:20.387 Process pid: 1478084 00:13:20.387 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:20.387 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1478084 00:13:20.387 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:20.387 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1478084 ']' 00:13:20.387 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.387 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:20.387 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.387 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:20.387 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:20.387 [2024-12-12 10:26:54.213177] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:13:20.387 [2024-12-12 10:26:54.213218] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.387 [2024-12-12 10:26:54.284171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:20.387 [2024-12-12 10:26:54.323308] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:20.387 [2024-12-12 10:26:54.323346] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
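
waitforlisten above parks the test until the freshly launched nvmf_tgt (pid 1478084) is actually reachable. A minimal loop with the same contract seen in the trace, polling up to max_retries=100 times for the RPC socket while checking the process is still alive, and assuming the default /var/tmp/spdk.sock; the real autotest_common.sh helper additionally probes the RPC server rather than just the socket file:

    waitforlisten() {   # usage: waitforlisten <pid> [rpc_addr]
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local i=0 max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while (( i++ < max_retries )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            [ -S "$rpc_addr" ] && return 0           # socket exists: listener is up
            sleep 0.1
        done
        return 1                                      # timed out
    }
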
00:13:20.387 [2024-12-12 10:26:54.323353] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:20.387 [2024-12-12 10:26:54.323359] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:20.387 [2024-12-12 10:26:54.323364] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:20.387 [2024-12-12 10:26:54.324843] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:13:20.387 [2024-12-12 10:26:54.324952] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:13:20.387 [2024-12-12 10:26:54.325061] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.387 [2024-12-12 10:26:54.325062] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:13:20.645 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:20.645 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:13:20.645 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:21.582 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:21.840 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:21.840 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:21.840 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:21.840 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:21.840 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:21.840 Malloc1 00:13:21.840 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:22.098 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:22.356 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:22.615 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:22.615 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:22.615 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:22.873 Malloc2 00:13:22.873 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
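For reference, the bring-up traced above reduces to launching nvmf_tgt and then issuing the same five RPCs per emulated device. A minimal sketch for device 1, with the commands taken verbatim from the trace ($SPDK and $rpc are shorthand variables for the workspace paths shown in the log):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc=$SPDK/scripts/rpc.py
  $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &     # SHM id 0, full tracepoint mask, cores 0-3
  # the harness waits for the RPC socket (waitforlisten on /var/tmp/spdk.sock) before continuing
  $rpc nvmf_create_transport -t VFIOUSER                        # one-time transport init
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1               # directory backing the vfio-user endpoint
  $rpc bdev_malloc_create 64 512 -b Malloc1                     # 64 MB RAM bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0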
00:13:22.873 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:23.131 10:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:23.391 10:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:23.391 10:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:23.391 10:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:23.391 10:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:23.391 10:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:23.391 10:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:23.391 [2024-12-12 10:26:57.304114] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:13:23.391 [2024-12-12 10:26:57.304147] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1478553 ] 00:13:23.391 [2024-12-12 10:26:57.341482] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:23.391 [2024-12-12 10:26:57.349877] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:23.391 [2024-12-12 10:26:57.349899] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f185036f000 00:13:23.391 [2024-12-12 10:26:57.350876] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:23.391 [2024-12-12 10:26:57.351877] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:23.391 [2024-12-12 10:26:57.352884] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:23.391 [2024-12-12 10:26:57.353892] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:23.391 [2024-12-12 10:26:57.354896] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:23.391 [2024-12-12 10:26:57.355903] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:23.391 [2024-12-12 10:26:57.356907] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:13:23.391 [2024-12-12 10:26:57.357911] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:23.391 [2024-12-12 10:26:57.358920] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:23.391 [2024-12-12 10:26:57.358929] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f1850364000 00:13:23.391 [2024-12-12 10:26:57.359844] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:23.391 [2024-12-12 10:26:57.369288] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:23.391 [2024-12-12 10:26:57.369316] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:13:23.391 [2024-12-12 10:26:57.374016] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:23.391 [2024-12-12 10:26:57.374052] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:23.391 [2024-12-12 10:26:57.374126] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:13:23.391 [2024-12-12 10:26:57.374144] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:13:23.391 [2024-12-12 10:26:57.374149] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:13:23.391 [2024-12-12 10:26:57.375016] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:23.391 [2024-12-12 10:26:57.375025] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:13:23.391 [2024-12-12 10:26:57.375031] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:13:23.391 [2024-12-12 10:26:57.376020] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:23.391 [2024-12-12 10:26:57.376028] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:13:23.391 [2024-12-12 10:26:57.376035] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:23.391 [2024-12-12 10:26:57.377029] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:23.391 [2024-12-12 10:26:57.377036] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:23.391 [2024-12-12 10:26:57.378032] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
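The BAR scan and sparse-mmap setup just logged is the initiator side of an attach: instead of a PCI address, SPDK client tools take a VFIOUSER transport ID naming the endpoint directory and subsystem NQN. The identify invocation from the trace, reflowed for readability (-L enables the named debug log flags; -g selects single-file DPDK memory segments per the EAL parameter line above):

  $SPDK/build/bin/spdk_nvme_identify \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -g -L nvme -L nvme_vfio -L vfio_pci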
00:13:23.391 [2024-12-12 10:26:57.378040] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:13:23.391 [2024-12-12 10:26:57.378044] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:23.391 [2024-12-12 10:26:57.378052] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:23.391 [2024-12-12 10:26:57.378159] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:13:23.391 [2024-12-12 10:26:57.378164] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:23.391 [2024-12-12 10:26:57.378169] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:23.391 [2024-12-12 10:26:57.379041] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:23.391 [2024-12-12 10:26:57.380046] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:23.391 [2024-12-12 10:26:57.381053] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:23.391 [2024-12-12 10:26:57.382055] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:23.391 [2024-12-12 10:26:57.382116] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:23.391 [2024-12-12 10:26:57.383068] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:23.391 [2024-12-12 10:26:57.383075] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:23.391 [2024-12-12 10:26:57.383080] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:23.391 [2024-12-12 10:26:57.383096] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:13:23.391 [2024-12-12 10:26:57.383105] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:23.391 [2024-12-12 10:26:57.383118] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:23.391 [2024-12-12 10:26:57.383122] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:23.391 [2024-12-12 10:26:57.383126] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:23.391 [2024-12-12 10:26:57.383139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:13:23.391 [2024-12-12 10:26:57.383178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:23.391 [2024-12-12 10:26:57.383188] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:13:23.391 [2024-12-12 10:26:57.383192] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:13:23.391 [2024-12-12 10:26:57.383196] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:13:23.391 [2024-12-12 10:26:57.383200] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:23.391 [2024-12-12 10:26:57.383205] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:13:23.391 [2024-12-12 10:26:57.383209] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:13:23.391 [2024-12-12 10:26:57.383213] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:13:23.391 [2024-12-12 10:26:57.383223] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:23.391 [2024-12-12 10:26:57.383233] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:23.392 [2024-12-12 10:26:57.383242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:23.392 [2024-12-12 10:26:57.383251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:23.392 [2024-12-12 10:26:57.383259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:23.392 [2024-12-12 10:26:57.383266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:23.392 [2024-12-12 10:26:57.383273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:23.392 [2024-12-12 10:26:57.383277] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:23.392 [2024-12-12 10:26:57.383285] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:23.392 [2024-12-12 10:26:57.383293] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:23.392 [2024-12-12 10:26:57.383304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:23.392 [2024-12-12 10:26:57.383310] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:13:23.392 
[2024-12-12 10:26:57.383314] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:23.392 [2024-12-12 10:26:57.383320] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:13:23.392 [2024-12-12 10:26:57.383326] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:13:23.392 [2024-12-12 10:26:57.383333] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:23.392 [2024-12-12 10:26:57.383349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:23.392 [2024-12-12 10:26:57.383396] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:13:23.392 [2024-12-12 10:26:57.383404] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:23.392 [2024-12-12 10:26:57.383411] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:23.392 [2024-12-12 10:26:57.383415] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:23.392 [2024-12-12 10:26:57.383418] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:23.392 [2024-12-12 10:26:57.383423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:23.392 [2024-12-12 10:26:57.383433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:23.392 [2024-12-12 10:26:57.383441] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:13:23.392 [2024-12-12 10:26:57.383453] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:13:23.392 [2024-12-12 10:26:57.383460] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:23.392 [2024-12-12 10:26:57.383466] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:23.392 [2024-12-12 10:26:57.383469] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:23.392 [2024-12-12 10:26:57.383472] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:23.392 [2024-12-12 10:26:57.383477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:23.392 [2024-12-12 10:26:57.383500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:23.392 [2024-12-12 10:26:57.383511] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:13:23.392 [2024-12-12 10:26:57.383518] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:23.392 [2024-12-12 10:26:57.383524] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:23.392 [2024-12-12 10:26:57.383527] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:23.392 [2024-12-12 10:26:57.383530] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:23.392 [2024-12-12 10:26:57.383536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:23.392 [2024-12-12 10:26:57.383545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:23.392 [2024-12-12 10:26:57.383552] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:23.392 [2024-12-12 10:26:57.383558] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:23.392 [2024-12-12 10:26:57.383564] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:13:23.392 [2024-12-12 10:26:57.383574] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:13:23.392 [2024-12-12 10:26:57.383578] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:23.392 [2024-12-12 10:26:57.383583] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:13:23.392 [2024-12-12 10:26:57.383587] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:23.392 [2024-12-12 10:26:57.383591] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:13:23.392 [2024-12-12 10:26:57.383596] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:13:23.392 [2024-12-12 10:26:57.383612] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:23.392 [2024-12-12 10:26:57.383623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:23.392 [2024-12-12 10:26:57.383635] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:23.392 [2024-12-12 10:26:57.383643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:23.392 [2024-12-12 10:26:57.383652] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:23.392 [2024-12-12 10:26:57.383660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:23.392 [2024-12-12 10:26:57.383670] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:23.392 [2024-12-12 10:26:57.383676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:23.392 [2024-12-12 10:26:57.383687] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:23.392 [2024-12-12 10:26:57.383691] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:23.392 [2024-12-12 10:26:57.383694] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:23.392 [2024-12-12 10:26:57.383697] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:23.392 [2024-12-12 10:26:57.383700] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:23.392 [2024-12-12 10:26:57.383706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:23.392 [2024-12-12 10:26:57.383712] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:23.392 [2024-12-12 10:26:57.383716] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:23.392 [2024-12-12 10:26:57.383719] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:23.392 [2024-12-12 10:26:57.383724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:23.392 [2024-12-12 10:26:57.383730] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:23.392 [2024-12-12 10:26:57.383734] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:23.392 [2024-12-12 10:26:57.383737] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:23.392 [2024-12-12 10:26:57.383742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:23.392 [2024-12-12 10:26:57.383749] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:23.392 [2024-12-12 10:26:57.383752] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:23.392 [2024-12-12 10:26:57.383755] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:23.392 [2024-12-12 10:26:57.383761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:23.392 [2024-12-12 10:26:57.383767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:23.392 [2024-12-12 10:26:57.383778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:13:23.392 [2024-12-12 10:26:57.383787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:23.392 [2024-12-12 10:26:57.383793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:23.392 ===================================================== 00:13:23.392 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:23.392 ===================================================== 00:13:23.392 Controller Capabilities/Features 00:13:23.392 ================================ 00:13:23.392 Vendor ID: 4e58 00:13:23.392 Subsystem Vendor ID: 4e58 00:13:23.392 Serial Number: SPDK1 00:13:23.392 Model Number: SPDK bdev Controller 00:13:23.392 Firmware Version: 25.01 00:13:23.392 Recommended Arb Burst: 6 00:13:23.392 IEEE OUI Identifier: 8d 6b 50 00:13:23.392 Multi-path I/O 00:13:23.392 May have multiple subsystem ports: Yes 00:13:23.392 May have multiple controllers: Yes 00:13:23.392 Associated with SR-IOV VF: No 00:13:23.393 Max Data Transfer Size: 131072 00:13:23.393 Max Number of Namespaces: 32 00:13:23.393 Max Number of I/O Queues: 127 00:13:23.393 NVMe Specification Version (VS): 1.3 00:13:23.393 NVMe Specification Version (Identify): 1.3 00:13:23.393 Maximum Queue Entries: 256 00:13:23.393 Contiguous Queues Required: Yes 00:13:23.393 Arbitration Mechanisms Supported 00:13:23.393 Weighted Round Robin: Not Supported 00:13:23.393 Vendor Specific: Not Supported 00:13:23.393 Reset Timeout: 15000 ms 00:13:23.393 Doorbell Stride: 4 bytes 00:13:23.393 NVM Subsystem Reset: Not Supported 00:13:23.393 Command Sets Supported 00:13:23.393 NVM Command Set: Supported 00:13:23.393 Boot Partition: Not Supported 00:13:23.393 Memory Page Size Minimum: 4096 bytes 00:13:23.393 Memory Page Size Maximum: 4096 bytes 00:13:23.393 Persistent Memory Region: Not Supported 00:13:23.393 Optional Asynchronous Events Supported 00:13:23.393 Namespace Attribute Notices: Supported 00:13:23.393 Firmware Activation Notices: Not Supported 00:13:23.393 ANA Change Notices: Not Supported 00:13:23.393 PLE Aggregate Log Change Notices: Not Supported 00:13:23.393 LBA Status Info Alert Notices: Not Supported 00:13:23.393 EGE Aggregate Log Change Notices: Not Supported 00:13:23.393 Normal NVM Subsystem Shutdown event: Not Supported 00:13:23.393 Zone Descriptor Change Notices: Not Supported 00:13:23.393 Discovery Log Change Notices: Not Supported 00:13:23.393 Controller Attributes 00:13:23.393 128-bit Host Identifier: Supported 00:13:23.393 Non-Operational Permissive Mode: Not Supported 00:13:23.393 NVM Sets: Not Supported 00:13:23.393 Read Recovery Levels: Not Supported 00:13:23.393 Endurance Groups: Not Supported 00:13:23.393 Predictable Latency Mode: Not Supported 00:13:23.393 Traffic Based Keep ALive: Not Supported 00:13:23.393 Namespace Granularity: Not Supported 00:13:23.393 SQ Associations: Not Supported 00:13:23.393 UUID List: Not Supported 00:13:23.393 Multi-Domain Subsystem: Not Supported 00:13:23.393 Fixed Capacity Management: Not Supported 00:13:23.393 Variable Capacity Management: Not Supported 00:13:23.393 Delete Endurance Group: Not Supported 00:13:23.393 Delete NVM Set: Not Supported 00:13:23.393 Extended LBA Formats Supported: Not Supported 00:13:23.393 Flexible Data Placement Supported: Not Supported 00:13:23.393 00:13:23.393 Controller Memory Buffer Support 00:13:23.393 ================================ 00:13:23.393 
Supported: No 00:13:23.393 00:13:23.393 Persistent Memory Region Support 00:13:23.393 ================================ 00:13:23.393 Supported: No 00:13:23.393 00:13:23.393 Admin Command Set Attributes 00:13:23.393 ============================ 00:13:23.393 Security Send/Receive: Not Supported 00:13:23.393 Format NVM: Not Supported 00:13:23.393 Firmware Activate/Download: Not Supported 00:13:23.393 Namespace Management: Not Supported 00:13:23.393 Device Self-Test: Not Supported 00:13:23.393 Directives: Not Supported 00:13:23.393 NVMe-MI: Not Supported 00:13:23.393 Virtualization Management: Not Supported 00:13:23.393 Doorbell Buffer Config: Not Supported 00:13:23.393 Get LBA Status Capability: Not Supported 00:13:23.393 Command & Feature Lockdown Capability: Not Supported 00:13:23.393 Abort Command Limit: 4 00:13:23.393 Async Event Request Limit: 4 00:13:23.393 Number of Firmware Slots: N/A 00:13:23.393 Firmware Slot 1 Read-Only: N/A 00:13:23.393 Firmware Activation Without Reset: N/A 00:13:23.393 Multiple Update Detection Support: N/A 00:13:23.393 Firmware Update Granularity: No Information Provided 00:13:23.393 Per-Namespace SMART Log: No 00:13:23.393 Asymmetric Namespace Access Log Page: Not Supported 00:13:23.393 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:23.393 Command Effects Log Page: Supported 00:13:23.393 Get Log Page Extended Data: Supported 00:13:23.393 Telemetry Log Pages: Not Supported 00:13:23.393 Persistent Event Log Pages: Not Supported 00:13:23.393 Supported Log Pages Log Page: May Support 00:13:23.393 Commands Supported & Effects Log Page: Not Supported 00:13:23.393 Feature Identifiers & Effects Log Page:May Support 00:13:23.393 NVMe-MI Commands & Effects Log Page: May Support 00:13:23.393 Data Area 4 for Telemetry Log: Not Supported 00:13:23.393 Error Log Page Entries Supported: 128 00:13:23.393 Keep Alive: Supported 00:13:23.393 Keep Alive Granularity: 10000 ms 00:13:23.393 00:13:23.393 NVM Command Set Attributes 00:13:23.393 ========================== 00:13:23.393 Submission Queue Entry Size 00:13:23.393 Max: 64 00:13:23.393 Min: 64 00:13:23.393 Completion Queue Entry Size 00:13:23.393 Max: 16 00:13:23.393 Min: 16 00:13:23.393 Number of Namespaces: 32 00:13:23.393 Compare Command: Supported 00:13:23.393 Write Uncorrectable Command: Not Supported 00:13:23.393 Dataset Management Command: Supported 00:13:23.393 Write Zeroes Command: Supported 00:13:23.393 Set Features Save Field: Not Supported 00:13:23.393 Reservations: Not Supported 00:13:23.393 Timestamp: Not Supported 00:13:23.393 Copy: Supported 00:13:23.393 Volatile Write Cache: Present 00:13:23.393 Atomic Write Unit (Normal): 1 00:13:23.393 Atomic Write Unit (PFail): 1 00:13:23.393 Atomic Compare & Write Unit: 1 00:13:23.393 Fused Compare & Write: Supported 00:13:23.393 Scatter-Gather List 00:13:23.393 SGL Command Set: Supported (Dword aligned) 00:13:23.393 SGL Keyed: Not Supported 00:13:23.393 SGL Bit Bucket Descriptor: Not Supported 00:13:23.393 SGL Metadata Pointer: Not Supported 00:13:23.393 Oversized SGL: Not Supported 00:13:23.393 SGL Metadata Address: Not Supported 00:13:23.393 SGL Offset: Not Supported 00:13:23.393 Transport SGL Data Block: Not Supported 00:13:23.393 Replay Protected Memory Block: Not Supported 00:13:23.393 00:13:23.393 Firmware Slot Information 00:13:23.393 ========================= 00:13:23.393 Active slot: 1 00:13:23.393 Slot 1 Firmware Revision: 25.01 00:13:23.393 00:13:23.393 00:13:23.393 Commands Supported and Effects 00:13:23.393 ============================== 00:13:23.393 Admin 
Commands 00:13:23.393 -------------- 00:13:23.393 Get Log Page (02h): Supported 00:13:23.393 Identify (06h): Supported 00:13:23.393 Abort (08h): Supported 00:13:23.393 Set Features (09h): Supported 00:13:23.393 Get Features (0Ah): Supported 00:13:23.393 Asynchronous Event Request (0Ch): Supported 00:13:23.393 Keep Alive (18h): Supported 00:13:23.393 I/O Commands 00:13:23.393 ------------ 00:13:23.393 Flush (00h): Supported LBA-Change 00:13:23.393 Write (01h): Supported LBA-Change 00:13:23.393 Read (02h): Supported 00:13:23.393 Compare (05h): Supported 00:13:23.393 Write Zeroes (08h): Supported LBA-Change 00:13:23.393 Dataset Management (09h): Supported LBA-Change 00:13:23.393 Copy (19h): Supported LBA-Change 00:13:23.393 00:13:23.393 Error Log 00:13:23.393 ========= 00:13:23.393 00:13:23.393 Arbitration 00:13:23.393 =========== 00:13:23.393 Arbitration Burst: 1 00:13:23.393 00:13:23.393 Power Management 00:13:23.393 ================ 00:13:23.393 Number of Power States: 1 00:13:23.393 Current Power State: Power State #0 00:13:23.393 Power State #0: 00:13:23.393 Max Power: 0.00 W 00:13:23.393 Non-Operational State: Operational 00:13:23.393 Entry Latency: Not Reported 00:13:23.393 Exit Latency: Not Reported 00:13:23.393 Relative Read Throughput: 0 00:13:23.393 Relative Read Latency: 0 00:13:23.393 Relative Write Throughput: 0 00:13:23.393 Relative Write Latency: 0 00:13:23.393 Idle Power: Not Reported 00:13:23.393 Active Power: Not Reported 00:13:23.393 Non-Operational Permissive Mode: Not Supported 00:13:23.393 00:13:23.393 Health Information 00:13:23.393 ================== 00:13:23.393 Critical Warnings: 00:13:23.393 Available Spare Space: OK 00:13:23.393 Temperature: OK 00:13:23.393 Device Reliability: OK 00:13:23.393 Read Only: No 00:13:23.393 Volatile Memory Backup: OK 00:13:23.393 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:23.393 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:23.393 Available Spare: 0% 00:13:23.393 Available Spare Threshold: 0% [2024-12-12 10:26:57.383877] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:23.393 [2024-12-12 10:26:57.383886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:23.393 [2024-12-12 10:26:57.383911] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:13:23.393 [2024-12-12 10:26:57.383920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.393 [2024-12-12 10:26:57.383925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.393 [2024-12-12 10:26:57.383930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.393 [2024-12-12 10:26:57.383936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.393 [2024-12-12 10:26:57.384078] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:23.393 [2024-12-12 10:26:57.384089] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:13:23.394 [2024-12-12 10:26:57.385083] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:23.394 [2024-12-12 10:26:57.385128] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:13:23.394 [2024-12-12 10:26:57.385134] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:13:23.394 [2024-12-12 10:26:57.386084] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:13:23.394 [2024-12-12 10:26:57.386095] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:13:23.394 [2024-12-12 10:26:57.386147] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:13:23.394 [2024-12-12 10:26:57.388575] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:13:23.652 Life Percentage Used: 0% 00:13:23.652 Data Units Read: 0 00:13:23.652 Data Units Written: 0 00:13:23.652 Host Read Commands: 0 00:13:23.652 Host Write Commands: 0 00:13:23.652 Controller Busy Time: 0 minutes 00:13:23.652 Power Cycles: 0 00:13:23.652 Power On Hours: 0 hours 00:13:23.652 Unsafe Shutdowns: 0 00:13:23.652 Unrecoverable Media Errors: 0 00:13:23.652 Lifetime Error Log Entries: 0 00:13:23.652 Warning Temperature Time: 0 minutes 00:13:23.652 Critical Temperature Time: 0 minutes 00:13:23.652 00:13:23.652 Number of Queues 00:13:23.652 ================ 00:13:23.652 Number of I/O Submission Queues: 127 00:13:23.652 Number of I/O Completion Queues: 127 00:13:23.652 00:13:23.652 Active Namespaces 00:13:23.652 ================= 00:13:23.652 Namespace ID:1 00:13:23.652 Error Recovery Timeout: Unlimited 00:13:23.652 Command Set Identifier: NVM (00h) 00:13:23.652 Deallocate: Supported 00:13:23.652 Deallocated/Unwritten Error: Not Supported 00:13:23.652 Deallocated Read Value: Unknown 00:13:23.652 Deallocate in Write Zeroes: Not Supported 00:13:23.652 Deallocated Guard Field: 0xFFFF 00:13:23.652 Flush: Supported 00:13:23.652 Reservation: Supported 00:13:23.652 Namespace Sharing Capabilities: Multiple Controllers 00:13:23.652 Size (in LBAs): 131072 (0GiB) 00:13:23.652 Capacity (in LBAs): 131072 (0GiB) 00:13:23.652 Utilization (in LBAs): 131072 (0GiB) 00:13:23.652 NGUID: 4B2809CBAA3A446ABC9964338BEA352E 00:13:23.652 UUID: 4b2809cb-aa3a-446a-bc99-64338bea352e 00:13:23.652 Thin Provisioning: Not Supported 00:13:23.652 Per-NS Atomic Units: Yes 00:13:23.652 Atomic Boundary Size (Normal): 0 00:13:23.652 Atomic Boundary Size (PFail): 0 00:13:23.652 Atomic Boundary Offset: 0 00:13:23.652 Maximum Single Source Range Length: 65535 00:13:23.652 Maximum Copy Length: 65535 00:13:23.652 Maximum Source Range Count: 1 00:13:23.652 NGUID/EUI64 Never Reused: No 00:13:23.652 Namespace Write Protected: No 00:13:23.652 Number of LBA Formats: 1 00:13:23.652 Current LBA Format: LBA Format #00 00:13:23.652 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:23.652 00:13:23.652 10:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
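The two benchmark passes that follow drive the same controller with spdk_nvme_perf, differing only in the -w workload argument (read, then write). The invocation from the trace, reflowed; per common usage -q is the queue depth, -o the I/O size in bytes, -t the run time in seconds, and -c the core mask, while -s 256 is assumed here to be the DPDK memory size in MB:

  # second run is identical except: -w write, -c 0x2 kept
  $SPDK/build/bin/spdk_nvme_perf \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2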
00:13:23.652 [2024-12-12 10:26:57.597030] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:28.917 Initializing NVMe Controllers 00:13:28.917 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:28.917 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:28.917 Initialization complete. Launching workers. 00:13:28.917 ======================================================== 00:13:28.917 Latency(us) 00:13:28.917 Device Information : IOPS MiB/s Average min max 00:13:28.917 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39946.71 156.04 3203.88 979.37 6660.42 00:13:28.917 ======================================================== 00:13:28.917 Total : 39946.71 156.04 3203.88 979.37 6660.42 00:13:28.917 00:13:28.917 [2024-12-12 10:27:02.618405] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:28.917 10:27:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:28.917 [2024-12-12 10:27:02.853497] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:34.182 Initializing NVMe Controllers 00:13:34.182 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:34.182 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:34.182 Initialization complete. Launching workers. 
00:13:34.182 ======================================================== 00:13:34.182 Latency(us) 00:13:34.182 Device Information : IOPS MiB/s Average min max 00:13:34.182 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16054.47 62.71 7978.21 4989.28 9978.46 00:13:34.182 ======================================================== 00:13:34.182 Total : 16054.47 62.71 7978.21 4989.28 9978.46 00:13:34.182 00:13:34.182 [2024-12-12 10:27:07.892408] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:34.182 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:34.182 [2024-12-12 10:27:08.105366] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:39.495 [2024-12-12 10:27:13.167823] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:39.495 Initializing NVMe Controllers 00:13:39.495 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:39.495 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:39.495 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:39.495 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:39.495 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:39.495 Initialization complete. Launching workers. 00:13:39.495 Starting thread on core 2 00:13:39.495 Starting thread on core 3 00:13:39.495 Starting thread on core 1 00:13:39.495 10:27:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:39.495 [2024-12-12 10:27:13.466973] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:42.830 [2024-12-12 10:27:16.539092] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:42.830 Initializing NVMe Controllers 00:13:42.830 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:42.830 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:42.830 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:42.830 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:42.830 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:42.830 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:42.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:42.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:42.830 Initialization complete. Launching workers. 
00:13:42.830 Starting thread on core 1 with urgent priority queue 00:13:42.830 Starting thread on core 2 with urgent priority queue 00:13:42.830 Starting thread on core 3 with urgent priority queue 00:13:42.830 Starting thread on core 0 with urgent priority queue 00:13:42.830 SPDK bdev Controller (SPDK1 ) core 0: 8205.00 IO/s 12.19 secs/100000 ios 00:13:42.830 SPDK bdev Controller (SPDK1 ) core 1: 8010.00 IO/s 12.48 secs/100000 ios 00:13:42.830 SPDK bdev Controller (SPDK1 ) core 2: 8646.67 IO/s 11.57 secs/100000 ios 00:13:42.830 SPDK bdev Controller (SPDK1 ) core 3: 9427.33 IO/s 10.61 secs/100000 ios 00:13:42.830 ======================================================== 00:13:42.830 00:13:42.830 10:27:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:42.830 [2024-12-12 10:27:16.830083] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:43.089 Initializing NVMe Controllers 00:13:43.089 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:43.089 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:43.089 Namespace ID: 1 size: 0GB 00:13:43.089 Initialization complete. 00:13:43.089 INFO: using host memory buffer for IO 00:13:43.089 Hello world! 00:13:43.089 [2024-12-12 10:27:16.866340] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:43.089 10:27:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:43.347 [2024-12-12 10:27:17.149022] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:44.282 Initializing NVMe Controllers 00:13:44.282 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:44.282 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:44.282 Initialization complete. Launching workers. 
00:13:44.282 submit (in ns) avg, min, max = 6774.0, 3172.4, 4000116.2 00:13:44.282 complete (in ns) avg, min, max = 21559.5, 1820.0, 5993025.7 00:13:44.282 00:13:44.282 Submit histogram 00:13:44.282 ================ 00:13:44.282 Range in us Cumulative Count 00:13:44.282 3.170 - 3.185: 0.1589% ( 26) 00:13:44.282 3.185 - 3.200: 1.5094% ( 221) 00:13:44.282 3.200 - 3.215: 5.7749% ( 698) 00:13:44.282 3.215 - 3.230: 11.8186% ( 989) 00:13:44.282 3.230 - 3.246: 17.4407% ( 920) 00:13:44.282 3.246 - 3.261: 24.7983% ( 1204) 00:13:44.282 3.261 - 3.276: 32.1987% ( 1211) 00:13:44.282 3.276 - 3.291: 37.7536% ( 909) 00:13:44.282 3.291 - 3.307: 43.1557% ( 884) 00:13:44.282 3.307 - 3.322: 48.5334% ( 880) 00:13:44.282 3.322 - 3.337: 52.2916% ( 615) 00:13:44.282 3.337 - 3.352: 56.0438% ( 614) 00:13:44.282 3.352 - 3.368: 63.0897% ( 1153) 00:13:44.282 3.368 - 3.383: 69.3962% ( 1032) 00:13:44.282 3.383 - 3.398: 74.7311% ( 873) 00:13:44.282 3.398 - 3.413: 80.1638% ( 889) 00:13:44.282 3.413 - 3.429: 83.5737% ( 558) 00:13:44.282 3.429 - 3.444: 85.7736% ( 360) 00:13:44.282 3.444 - 3.459: 86.9531% ( 193) 00:13:44.282 3.459 - 3.474: 87.6986% ( 122) 00:13:44.282 3.474 - 3.490: 88.1630% ( 76) 00:13:44.282 3.490 - 3.505: 88.6825% ( 85) 00:13:44.282 3.505 - 3.520: 89.3180% ( 104) 00:13:44.282 3.520 - 3.535: 90.1613% ( 138) 00:13:44.282 3.535 - 3.550: 91.2735% ( 182) 00:13:44.282 3.550 - 3.566: 92.2513% ( 160) 00:13:44.282 3.566 - 3.581: 92.9846% ( 120) 00:13:44.282 3.581 - 3.596: 93.7668% ( 128) 00:13:44.282 3.596 - 3.611: 94.5979% ( 136) 00:13:44.282 3.611 - 3.627: 95.5940% ( 163) 00:13:44.282 3.627 - 3.642: 96.5595% ( 158) 00:13:44.282 3.642 - 3.657: 97.4639% ( 148) 00:13:44.282 3.657 - 3.672: 98.0200% ( 91) 00:13:44.282 3.672 - 3.688: 98.4234% ( 66) 00:13:44.282 3.688 - 3.703: 98.7106% ( 47) 00:13:44.282 3.703 - 3.718: 99.0161% ( 50) 00:13:44.282 3.718 - 3.733: 99.2300% ( 35) 00:13:44.282 3.733 - 3.749: 99.4439% ( 35) 00:13:44.282 3.749 - 3.764: 99.5600% ( 19) 00:13:44.282 3.764 - 3.779: 99.5906% ( 5) 00:13:44.282 3.779 - 3.794: 99.6089% ( 3) 00:13:44.282 3.794 - 3.810: 99.6272% ( 3) 00:13:44.282 3.810 - 3.825: 99.6333% ( 1) 00:13:44.282 3.825 - 3.840: 99.6456% ( 2) 00:13:44.282 3.840 - 3.855: 99.6578% ( 2) 00:13:44.282 3.992 - 4.023: 99.6700% ( 2) 00:13:44.282 4.145 - 4.175: 99.6761% ( 1) 00:13:44.282 5.242 - 5.272: 99.6822% ( 1) 00:13:44.282 5.333 - 5.364: 99.6883% ( 1) 00:13:44.282 5.486 - 5.516: 99.7006% ( 2) 00:13:44.282 5.547 - 5.577: 99.7067% ( 1) 00:13:44.282 5.638 - 5.669: 99.7128% ( 1) 00:13:44.282 5.973 - 6.004: 99.7189% ( 1) 00:13:44.282 6.004 - 6.034: 99.7250% ( 1) 00:13:44.282 6.095 - 6.126: 99.7311% ( 1) 00:13:44.282 6.126 - 6.156: 99.7372% ( 1) 00:13:44.282 6.430 - 6.461: 99.7433% ( 1) 00:13:44.282 6.491 - 6.522: 99.7556% ( 2) 00:13:44.282 6.552 - 6.583: 99.7739% ( 3) 00:13:44.282 6.613 - 6.644: 99.7861% ( 2) 00:13:44.282 6.705 - 6.735: 99.7922% ( 1) 00:13:44.282 6.766 - 6.796: 99.7983% ( 1) 00:13:44.282 6.827 - 6.857: 99.8044% ( 1) 00:13:44.282 6.888 - 6.918: 99.8289% ( 4) 00:13:44.282 6.949 - 6.979: 99.8350% ( 1) 00:13:44.282 7.040 - 7.070: 99.8472% ( 2) 00:13:44.282 7.101 - 7.131: 99.8533% ( 1) 00:13:44.282 7.223 - 7.253: 99.8594% ( 1) 00:13:44.282 7.314 - 7.345: 99.8656% ( 1) 00:13:44.282 7.406 - 7.436: 99.8717% ( 1) 00:13:44.282 7.467 - 7.497: 99.8778% ( 1) 00:13:44.282 7.497 - 7.528: 99.8839% ( 1) 00:13:44.282 7.558 - 7.589: 99.8900% ( 1) 00:13:44.282 7.802 - 7.863: 99.8961% ( 1) 00:13:44.282 8.107 - 8.168: 99.9022% ( 1) 00:13:44.282 8.411 - 8.472: 99.9083% ( 1) 00:13:44.282 8.899 - 8.960: 
99.9144% ( 1) 00:13:44.282 3994.575 - 4025.783: 100.0000% ( 14) 00:13:44.282 00:13:44.282 Complete histogram 00:13:44.282 ================== 00:13:44.282 [2024-12-12 10:27:18.170048] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:44.282 Range in us Cumulative Count 00:13:44.282 1.813 - 1.821: 0.0061% ( 1) 00:13:44.282 1.821 - 1.829: 0.0856% ( 13) 00:13:44.282 1.829 - 1.836: 1.2405% ( 189) 00:13:44.282 1.836 - 1.844: 4.7910% ( 581) 00:13:44.282 1.844 - 1.851: 10.5537% ( 943) 00:13:44.282 1.851 - 1.859: 15.5585% ( 819) 00:13:44.282 1.859 - 1.867: 18.8340% ( 536) 00:13:44.282 1.867 - 1.874: 20.6551% ( 298) 00:13:44.282 1.874 - 1.882: 23.0445% ( 391) 00:13:44.282 1.882 - 1.890: 30.4754% ( 1216) 00:13:44.282 1.890 - 1.897: 46.3945% ( 2605) 00:13:44.282 1.897 - 1.905: 66.9152% ( 3358) 00:13:44.282 1.905 - 1.912: 83.1704% ( 2660) 00:13:44.282 1.912 - 1.920: 91.8418% ( 1419) 00:13:44.282 1.920 - 1.928: 95.8873% ( 662) 00:13:44.282 1.928 - 1.935: 97.7023% ( 297) 00:13:44.282 1.935 - 1.943: 98.6250% ( 151) 00:13:44.282 1.943 - 1.950: 99.0528% ( 70) 00:13:44.282 1.950 - 1.966: 99.2300% ( 29) 00:13:44.282 1.966 - 1.981: 99.2789% ( 8) 00:13:44.282 1.981 - 1.996: 99.2850% ( 1) 00:13:44.282 2.011 - 2.027: 99.2911% ( 1) 00:13:44.282 2.027 - 2.042: 99.2972% ( 1) 00:13:44.282 2.042 - 2.057: 99.3033% ( 1) 00:13:44.282 2.088 - 2.103: 99.3156% ( 2) 00:13:44.282 2.179 - 2.194: 99.3278% ( 2) 00:13:44.282 2.210 - 2.225: 99.3339% ( 1) 00:13:44.282 2.423 - 2.438: 99.3400% ( 1) 00:13:44.282 2.530 - 2.545: 99.3461% ( 1) 00:13:44.282 3.931 - 3.962: 99.3522% ( 1) 00:13:44.282 4.510 - 4.541: 99.3583% ( 1) 00:13:44.282 4.724 - 4.754: 99.3706% ( 2) 00:13:44.282 4.815 - 4.846: 99.3767% ( 1) 00:13:44.282 5.333 - 5.364: 99.3828% ( 1) 00:13:44.282 5.425 - 5.455: 99.3889% ( 1) 00:13:44.282 5.486 - 5.516: 99.3950% ( 1) 00:13:44.282 5.516 - 5.547: 99.4072% ( 2) 00:13:44.282 5.669 - 5.699: 99.4133% ( 1) 00:13:44.282 5.730 - 5.760: 99.4195% ( 1) 00:13:44.282 5.760 - 5.790: 99.4256% ( 1) 00:13:44.282 5.790 - 5.821: 99.4317% ( 1) 00:13:44.282 5.882 - 5.912: 99.4378% ( 1) 00:13:44.282 6.217 - 6.248: 99.4500% ( 2) 00:13:44.282 6.888 - 6.918: 99.4561% ( 1) 00:13:44.282 7.345 - 7.375: 99.4622% ( 1) 00:13:44.282 7.375 - 7.406: 99.4683% ( 1) 00:13:44.282 8.168 - 8.229: 99.4745% ( 1) 00:13:44.282 8.472 - 8.533: 99.4806% ( 1) 00:13:44.282 15.543 - 15.604: 99.4867% ( 1) 00:13:44.282 39.010 - 39.253: 99.4928% ( 1) 00:13:44.282 170.667 - 171.642: 99.4989% ( 1) 00:13:44.282 998.644 - 1006.446: 99.5050% ( 1) 00:13:44.282 2012.891 - 2028.495: 99.5172% ( 2) 00:13:44.282 2028.495 - 2044.099: 99.5233% ( 1) 00:13:44.282 2855.497 - 2871.101: 99.5295% ( 1) 00:13:44.282 3994.575 - 4025.783: 99.9878% ( 75) 00:13:44.282 5960.655 - 5991.863: 99.9939% ( 1) 00:13:44.282 5991.863 - 6023.070: 100.0000% ( 1) 00:13:44.282 00:13:44.282 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:44.282 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:44.282 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:44.282 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:44.282 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user --
target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:44.541 [ 00:13:44.541 { 00:13:44.541 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:44.541 "subtype": "Discovery", 00:13:44.541 "listen_addresses": [], 00:13:44.541 "allow_any_host": true, 00:13:44.541 "hosts": [] 00:13:44.541 }, 00:13:44.541 { 00:13:44.541 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:44.541 "subtype": "NVMe", 00:13:44.541 "listen_addresses": [ 00:13:44.541 { 00:13:44.541 "trtype": "VFIOUSER", 00:13:44.541 "adrfam": "IPv4", 00:13:44.541 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:44.541 "trsvcid": "0" 00:13:44.541 } 00:13:44.541 ], 00:13:44.541 "allow_any_host": true, 00:13:44.541 "hosts": [], 00:13:44.541 "serial_number": "SPDK1", 00:13:44.541 "model_number": "SPDK bdev Controller", 00:13:44.541 "max_namespaces": 32, 00:13:44.541 "min_cntlid": 1, 00:13:44.541 "max_cntlid": 65519, 00:13:44.541 "namespaces": [ 00:13:44.541 { 00:13:44.541 "nsid": 1, 00:13:44.541 "bdev_name": "Malloc1", 00:13:44.541 "name": "Malloc1", 00:13:44.541 "nguid": "4B2809CBAA3A446ABC9964338BEA352E", 00:13:44.541 "uuid": "4b2809cb-aa3a-446a-bc99-64338bea352e" 00:13:44.541 } 00:13:44.541 ] 00:13:44.541 }, 00:13:44.541 { 00:13:44.541 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:44.541 "subtype": "NVMe", 00:13:44.541 "listen_addresses": [ 00:13:44.541 { 00:13:44.541 "trtype": "VFIOUSER", 00:13:44.541 "adrfam": "IPv4", 00:13:44.541 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:44.541 "trsvcid": "0" 00:13:44.541 } 00:13:44.541 ], 00:13:44.541 "allow_any_host": true, 00:13:44.541 "hosts": [], 00:13:44.541 "serial_number": "SPDK2", 00:13:44.541 "model_number": "SPDK bdev Controller", 00:13:44.541 "max_namespaces": 32, 00:13:44.541 "min_cntlid": 1, 00:13:44.541 "max_cntlid": 65519, 00:13:44.541 "namespaces": [ 00:13:44.541 { 00:13:44.541 "nsid": 1, 00:13:44.541 "bdev_name": "Malloc2", 00:13:44.541 "name": "Malloc2", 00:13:44.541 "nguid": "35D3B09687804F9B9A0BA7C60825904E", 00:13:44.541 "uuid": "35d3b096-8780-4f9b-9a0b-a7c60825904e" 00:13:44.541 } 00:13:44.541 ] 00:13:44.541 } 00:13:44.541 ] 00:13:44.541 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:44.541 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1482501 00:13:44.541 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:44.541 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:44.541 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:13:44.541 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:44.541 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:44.541 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:13:44.541 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:44.541 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:44.800 [2024-12-12 10:27:18.579990] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:44.800 Malloc3 00:13:44.800 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:45.059 [2024-12-12 10:27:18.823929] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:45.059 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:45.059 Asynchronous Event Request test 00:13:45.059 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:45.059 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:45.059 Registering asynchronous event callbacks... 00:13:45.059 Starting namespace attribute notice tests for all controllers... 00:13:45.059 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:45.059 aer_cb - Changed Namespace 00:13:45.059 Cleaning up... 00:13:45.059 [ 00:13:45.059 { 00:13:45.059 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:45.059 "subtype": "Discovery", 00:13:45.059 "listen_addresses": [], 00:13:45.059 "allow_any_host": true, 00:13:45.059 "hosts": [] 00:13:45.059 }, 00:13:45.059 { 00:13:45.059 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:45.059 "subtype": "NVMe", 00:13:45.059 "listen_addresses": [ 00:13:45.059 { 00:13:45.059 "trtype": "VFIOUSER", 00:13:45.059 "adrfam": "IPv4", 00:13:45.059 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:45.059 "trsvcid": "0" 00:13:45.059 } 00:13:45.059 ], 00:13:45.059 "allow_any_host": true, 00:13:45.059 "hosts": [], 00:13:45.059 "serial_number": "SPDK1", 00:13:45.059 "model_number": "SPDK bdev Controller", 00:13:45.059 "max_namespaces": 32, 00:13:45.059 "min_cntlid": 1, 00:13:45.059 "max_cntlid": 65519, 00:13:45.059 "namespaces": [ 00:13:45.059 { 00:13:45.059 "nsid": 1, 00:13:45.059 "bdev_name": "Malloc1", 00:13:45.059 "name": "Malloc1", 00:13:45.059 "nguid": "4B2809CBAA3A446ABC9964338BEA352E", 00:13:45.059 "uuid": "4b2809cb-aa3a-446a-bc99-64338bea352e" 00:13:45.059 }, 00:13:45.059 { 00:13:45.059 "nsid": 2, 00:13:45.059 "bdev_name": "Malloc3", 00:13:45.059 "name": "Malloc3", 00:13:45.059 "nguid": "738757497C0C4F0D9376F6C2C4E232AB", 00:13:45.059 "uuid": "73875749-7c0c-4f0d-9376-f6c2c4e232ab" 00:13:45.059 } 00:13:45.059 ] 00:13:45.059 }, 00:13:45.059 { 00:13:45.059 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:45.059 "subtype": "NVMe", 00:13:45.059 "listen_addresses": [ 00:13:45.059 { 00:13:45.059 "trtype": "VFIOUSER", 00:13:45.059 "adrfam": "IPv4", 00:13:45.059 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:45.059 "trsvcid": "0" 00:13:45.059 } 00:13:45.059 ], 00:13:45.059 "allow_any_host": true, 00:13:45.059 "hosts": [], 00:13:45.059 "serial_number": "SPDK2", 00:13:45.059 "model_number": "SPDK bdev 
Controller", 00:13:45.059 "max_namespaces": 32, 00:13:45.059 "min_cntlid": 1, 00:13:45.059 "max_cntlid": 65519, 00:13:45.059 "namespaces": [ 00:13:45.059 { 00:13:45.059 "nsid": 1, 00:13:45.059 "bdev_name": "Malloc2", 00:13:45.059 "name": "Malloc2", 00:13:45.059 "nguid": "35D3B09687804F9B9A0BA7C60825904E", 00:13:45.059 "uuid": "35d3b096-8780-4f9b-9a0b-a7c60825904e" 00:13:45.059 } 00:13:45.059 ] 00:13:45.059 } 00:13:45.059 ] 00:13:45.059 10:27:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1482501 00:13:45.059 10:27:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:45.059 10:27:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:45.059 10:27:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:45.059 10:27:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:45.059 [2024-12-12 10:27:19.059806] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:13:45.059 [2024-12-12 10:27:19.059834] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1482656 ] 00:13:45.319 [2024-12-12 10:27:19.095426] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:45.319 [2024-12-12 10:27:19.103797] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:45.319 [2024-12-12 10:27:19.103824] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f6f64ede000 00:13:45.320 [2024-12-12 10:27:19.104808] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:45.320 [2024-12-12 10:27:19.105815] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:45.320 [2024-12-12 10:27:19.106821] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:45.320 [2024-12-12 10:27:19.107825] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:45.320 [2024-12-12 10:27:19.108832] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:45.320 [2024-12-12 10:27:19.109842] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:45.320 [2024-12-12 10:27:19.110849] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:45.320 [2024-12-12 10:27:19.111860] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:13:45.320 [2024-12-12 10:27:19.112869] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:45.320 [2024-12-12 10:27:19.112879] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f6f64ed3000 00:13:45.320 [2024-12-12 10:27:19.113797] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:45.320 [2024-12-12 10:27:19.127162] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:45.320 [2024-12-12 10:27:19.127185] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:13:45.320 [2024-12-12 10:27:19.129251] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:45.320 [2024-12-12 10:27:19.129287] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:45.320 [2024-12-12 10:27:19.129363] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:13:45.320 [2024-12-12 10:27:19.129378] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:13:45.320 [2024-12-12 10:27:19.129383] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:13:45.320 [2024-12-12 10:27:19.130258] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:45.320 [2024-12-12 10:27:19.130268] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:13:45.320 [2024-12-12 10:27:19.130275] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:13:45.320 [2024-12-12 10:27:19.131263] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:45.320 [2024-12-12 10:27:19.131272] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:13:45.320 [2024-12-12 10:27:19.131279] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:45.320 [2024-12-12 10:27:19.132275] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:45.320 [2024-12-12 10:27:19.132284] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:45.320 [2024-12-12 10:27:19.133281] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:45.320 [2024-12-12 10:27:19.133290] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
00:13:45.320 [2024-12-12 10:27:19.133295] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:45.320 [2024-12-12 10:27:19.133301] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:45.320 [2024-12-12 10:27:19.133409] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:13:45.320 [2024-12-12 10:27:19.133413] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:45.320 [2024-12-12 10:27:19.133418] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:45.320 [2024-12-12 10:27:19.134291] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:45.320 [2024-12-12 10:27:19.135292] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:45.320 [2024-12-12 10:27:19.136300] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:45.320 [2024-12-12 10:27:19.137300] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:45.320 [2024-12-12 10:27:19.137338] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:45.320 [2024-12-12 10:27:19.138309] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:45.320 [2024-12-12 10:27:19.138319] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:45.320 [2024-12-12 10:27:19.138323] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:45.320 [2024-12-12 10:27:19.138341] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:13:45.320 [2024-12-12 10:27:19.138348] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:45.320 [2024-12-12 10:27:19.138358] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:45.320 [2024-12-12 10:27:19.138363] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:45.320 [2024-12-12 10:27:19.138366] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:45.320 [2024-12-12 10:27:19.138378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:45.320 [2024-12-12 10:27:19.144578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:45.320 
[2024-12-12 10:27:19.144589] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:13:45.320 [2024-12-12 10:27:19.144594] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:13:45.320 [2024-12-12 10:27:19.144598] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:13:45.320 [2024-12-12 10:27:19.144602] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:45.320 [2024-12-12 10:27:19.144607] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:13:45.320 [2024-12-12 10:27:19.144611] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:13:45.320 [2024-12-12 10:27:19.144615] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:13:45.320 [2024-12-12 10:27:19.144624] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:45.320 [2024-12-12 10:27:19.144635] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:45.320 [2024-12-12 10:27:19.152578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:45.320 [2024-12-12 10:27:19.152590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:45.320 [2024-12-12 10:27:19.152597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:45.320 [2024-12-12 10:27:19.152604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:45.320 [2024-12-12 10:27:19.152612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:45.320 [2024-12-12 10:27:19.152616] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:45.320 [2024-12-12 10:27:19.152628] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:45.320 [2024-12-12 10:27:19.152636] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:45.320 [2024-12-12 10:27:19.159575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:45.320 [2024-12-12 10:27:19.159583] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:13:45.320 [2024-12-12 10:27:19.159587] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:13:45.320 [2024-12-12 10:27:19.159594] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:13:45.320 [2024-12-12 10:27:19.159599] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:13:45.320 [2024-12-12 10:27:19.159607] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:45.320 [2024-12-12 10:27:19.167575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:45.320 [2024-12-12 10:27:19.167627] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:13:45.320 [2024-12-12 10:27:19.167637] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:45.320 [2024-12-12 10:27:19.167645] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:45.320 [2024-12-12 10:27:19.167650] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:45.320 [2024-12-12 10:27:19.167654] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:45.320 [2024-12-12 10:27:19.167659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:45.320 [2024-12-12 10:27:19.173577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:45.320 [2024-12-12 10:27:19.173591] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:13:45.321 [2024-12-12 10:27:19.173604] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:13:45.321 [2024-12-12 10:27:19.173611] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:45.321 [2024-12-12 10:27:19.173618] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:45.321 [2024-12-12 10:27:19.173622] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:45.321 [2024-12-12 10:27:19.173625] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:45.321 [2024-12-12 10:27:19.173631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:45.321 [2024-12-12 10:27:19.182579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:45.321 [2024-12-12 10:27:19.182595] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:45.321 [2024-12-12 10:27:19.182604] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:13:45.321 [2024-12-12 10:27:19.182611] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:45.321 [2024-12-12 10:27:19.182615] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:45.321 [2024-12-12 10:27:19.182618] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:45.321 [2024-12-12 10:27:19.182624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:45.321 [2024-12-12 10:27:19.190575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:45.321 [2024-12-12 10:27:19.190585] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:45.321 [2024-12-12 10:27:19.190591] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:45.321 [2024-12-12 10:27:19.190598] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:13:45.321 [2024-12-12 10:27:19.190604] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:13:45.321 [2024-12-12 10:27:19.190608] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:45.321 [2024-12-12 10:27:19.190612] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:13:45.321 [2024-12-12 10:27:19.190617] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:45.321 [2024-12-12 10:27:19.190621] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:13:45.321 [2024-12-12 10:27:19.190625] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:13:45.321 [2024-12-12 10:27:19.190642] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:45.321 [2024-12-12 10:27:19.198575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:45.321 [2024-12-12 10:27:19.198588] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:45.321 [2024-12-12 10:27:19.206575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:45.321 [2024-12-12 10:27:19.206587] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:45.321 [2024-12-12 10:27:19.214575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
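Before the remaining feature queries and log-page reads below, a short key for decoding the admin commands in this trace: the IDENTIFY (opcode 06h) commands differ only in cdw10, which carries the CNS value. The mapping below follows the NVMe 1.3 spec this controller reports; the grep is a sketch for pulling the same commands out of a saved copy of such a console log (build.log is a hypothetical file name):

  # CNS values driven by the IDENTIFY commands in this trace:
  #   cdw10:00000001  Identify Controller data structure
  #   cdw10:00000002  Active Namespace ID list
  #   cdw10:00000000  Identify Namespace data structure
  #   cdw10:00000003  Namespace Identification Descriptor list
  grep 'IDENTIFY (06)' build.log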
00:13:45.321 [2024-12-12 10:27:19.214587] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:45.321 [2024-12-12 10:27:19.222574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:45.321 [2024-12-12 10:27:19.222589] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:45.321 [2024-12-12 10:27:19.222593] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:45.321 [2024-12-12 10:27:19.222596] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:45.321 [2024-12-12 10:27:19.222601] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:45.321 [2024-12-12 10:27:19.222604] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:45.321 [2024-12-12 10:27:19.222610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:45.321 [2024-12-12 10:27:19.222616] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:45.321 [2024-12-12 10:27:19.222620] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:45.321 [2024-12-12 10:27:19.222623] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:45.321 [2024-12-12 10:27:19.222629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:45.321 [2024-12-12 10:27:19.222635] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:45.321 [2024-12-12 10:27:19.222638] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:45.321 [2024-12-12 10:27:19.222641] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:45.321 [2024-12-12 10:27:19.222646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:45.321 [2024-12-12 10:27:19.222653] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:45.321 [2024-12-12 10:27:19.222657] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:45.321 [2024-12-12 10:27:19.222659] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:45.321 [2024-12-12 10:27:19.222664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:45.321 [2024-12-12 10:27:19.230575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:45.321 [2024-12-12 10:27:19.230590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:45.321 [2024-12-12 10:27:19.230600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:45.321 
[2024-12-12 10:27:19.230606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:45.321 ===================================================== 00:13:45.321 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:45.321 ===================================================== 00:13:45.321 Controller Capabilities/Features 00:13:45.321 ================================ 00:13:45.321 Vendor ID: 4e58 00:13:45.321 Subsystem Vendor ID: 4e58 00:13:45.321 Serial Number: SPDK2 00:13:45.321 Model Number: SPDK bdev Controller 00:13:45.321 Firmware Version: 25.01 00:13:45.321 Recommended Arb Burst: 6 00:13:45.321 IEEE OUI Identifier: 8d 6b 50 00:13:45.321 Multi-path I/O 00:13:45.321 May have multiple subsystem ports: Yes 00:13:45.321 May have multiple controllers: Yes 00:13:45.321 Associated with SR-IOV VF: No 00:13:45.321 Max Data Transfer Size: 131072 00:13:45.321 Max Number of Namespaces: 32 00:13:45.321 Max Number of I/O Queues: 127 00:13:45.321 NVMe Specification Version (VS): 1.3 00:13:45.321 NVMe Specification Version (Identify): 1.3 00:13:45.321 Maximum Queue Entries: 256 00:13:45.321 Contiguous Queues Required: Yes 00:13:45.321 Arbitration Mechanisms Supported 00:13:45.321 Weighted Round Robin: Not Supported 00:13:45.321 Vendor Specific: Not Supported 00:13:45.321 Reset Timeout: 15000 ms 00:13:45.321 Doorbell Stride: 4 bytes 00:13:45.321 NVM Subsystem Reset: Not Supported 00:13:45.321 Command Sets Supported 00:13:45.321 NVM Command Set: Supported 00:13:45.321 Boot Partition: Not Supported 00:13:45.321 Memory Page Size Minimum: 4096 bytes 00:13:45.321 Memory Page Size Maximum: 4096 bytes 00:13:45.321 Persistent Memory Region: Not Supported 00:13:45.321 Optional Asynchronous Events Supported 00:13:45.321 Namespace Attribute Notices: Supported 00:13:45.321 Firmware Activation Notices: Not Supported 00:13:45.321 ANA Change Notices: Not Supported 00:13:45.321 PLE Aggregate Log Change Notices: Not Supported 00:13:45.321 LBA Status Info Alert Notices: Not Supported 00:13:45.321 EGE Aggregate Log Change Notices: Not Supported 00:13:45.321 Normal NVM Subsystem Shutdown event: Not Supported 00:13:45.321 Zone Descriptor Change Notices: Not Supported 00:13:45.321 Discovery Log Change Notices: Not Supported 00:13:45.321 Controller Attributes 00:13:45.321 128-bit Host Identifier: Supported 00:13:45.321 Non-Operational Permissive Mode: Not Supported 00:13:45.321 NVM Sets: Not Supported 00:13:45.321 Read Recovery Levels: Not Supported 00:13:45.321 Endurance Groups: Not Supported 00:13:45.321 Predictable Latency Mode: Not Supported 00:13:45.321 Traffic Based Keep ALive: Not Supported 00:13:45.321 Namespace Granularity: Not Supported 00:13:45.321 SQ Associations: Not Supported 00:13:45.321 UUID List: Not Supported 00:13:45.321 Multi-Domain Subsystem: Not Supported 00:13:45.321 Fixed Capacity Management: Not Supported 00:13:45.321 Variable Capacity Management: Not Supported 00:13:45.321 Delete Endurance Group: Not Supported 00:13:45.321 Delete NVM Set: Not Supported 00:13:45.321 Extended LBA Formats Supported: Not Supported 00:13:45.321 Flexible Data Placement Supported: Not Supported 00:13:45.321 00:13:45.321 Controller Memory Buffer Support 00:13:45.321 ================================ 00:13:45.321 Supported: No 00:13:45.321 00:13:45.321 Persistent Memory Region Support 00:13:45.321 ================================ 00:13:45.321 Supported: No 00:13:45.321 00:13:45.321 Admin Command Set Attributes 
00:13:45.321 ============================ 00:13:45.321 Security Send/Receive: Not Supported 00:13:45.321 Format NVM: Not Supported 00:13:45.321 Firmware Activate/Download: Not Supported 00:13:45.321 Namespace Management: Not Supported 00:13:45.322 Device Self-Test: Not Supported 00:13:45.322 Directives: Not Supported 00:13:45.322 NVMe-MI: Not Supported 00:13:45.322 Virtualization Management: Not Supported 00:13:45.322 Doorbell Buffer Config: Not Supported 00:13:45.322 Get LBA Status Capability: Not Supported 00:13:45.322 Command & Feature Lockdown Capability: Not Supported 00:13:45.322 Abort Command Limit: 4 00:13:45.322 Async Event Request Limit: 4 00:13:45.322 Number of Firmware Slots: N/A 00:13:45.322 Firmware Slot 1 Read-Only: N/A 00:13:45.322 Firmware Activation Without Reset: N/A 00:13:45.322 Multiple Update Detection Support: N/A 00:13:45.322 Firmware Update Granularity: No Information Provided 00:13:45.322 Per-Namespace SMART Log: No 00:13:45.322 Asymmetric Namespace Access Log Page: Not Supported 00:13:45.322 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:45.322 Command Effects Log Page: Supported 00:13:45.322 Get Log Page Extended Data: Supported 00:13:45.322 Telemetry Log Pages: Not Supported 00:13:45.322 Persistent Event Log Pages: Not Supported 00:13:45.322 Supported Log Pages Log Page: May Support 00:13:45.322 Commands Supported & Effects Log Page: Not Supported 00:13:45.322 Feature Identifiers & Effects Log Page:May Support 00:13:45.322 NVMe-MI Commands & Effects Log Page: May Support 00:13:45.322 Data Area 4 for Telemetry Log: Not Supported 00:13:45.322 Error Log Page Entries Supported: 128 00:13:45.322 Keep Alive: Supported 00:13:45.322 Keep Alive Granularity: 10000 ms 00:13:45.322 00:13:45.322 NVM Command Set Attributes 00:13:45.322 ========================== 00:13:45.322 Submission Queue Entry Size 00:13:45.322 Max: 64 00:13:45.322 Min: 64 00:13:45.322 Completion Queue Entry Size 00:13:45.322 Max: 16 00:13:45.322 Min: 16 00:13:45.322 Number of Namespaces: 32 00:13:45.322 Compare Command: Supported 00:13:45.322 Write Uncorrectable Command: Not Supported 00:13:45.322 Dataset Management Command: Supported 00:13:45.322 Write Zeroes Command: Supported 00:13:45.322 Set Features Save Field: Not Supported 00:13:45.322 Reservations: Not Supported 00:13:45.322 Timestamp: Not Supported 00:13:45.322 Copy: Supported 00:13:45.322 Volatile Write Cache: Present 00:13:45.322 Atomic Write Unit (Normal): 1 00:13:45.322 Atomic Write Unit (PFail): 1 00:13:45.322 Atomic Compare & Write Unit: 1 00:13:45.322 Fused Compare & Write: Supported 00:13:45.322 Scatter-Gather List 00:13:45.322 SGL Command Set: Supported (Dword aligned) 00:13:45.322 SGL Keyed: Not Supported 00:13:45.322 SGL Bit Bucket Descriptor: Not Supported 00:13:45.322 SGL Metadata Pointer: Not Supported 00:13:45.322 Oversized SGL: Not Supported 00:13:45.322 SGL Metadata Address: Not Supported 00:13:45.322 SGL Offset: Not Supported 00:13:45.322 Transport SGL Data Block: Not Supported 00:13:45.322 Replay Protected Memory Block: Not Supported 00:13:45.322 00:13:45.322 Firmware Slot Information 00:13:45.322 ========================= 00:13:45.322 Active slot: 1 00:13:45.322 Slot 1 Firmware Revision: 25.01 00:13:45.322 00:13:45.322 00:13:45.322 Commands Supported and Effects 00:13:45.322 ============================== 00:13:45.322 Admin Commands 00:13:45.322 -------------- 00:13:45.322 Get Log Page (02h): Supported 00:13:45.322 Identify (06h): Supported 00:13:45.322 Abort (08h): Supported 00:13:45.322 Set Features (09h): Supported 
00:13:45.322 Get Features (0Ah): Supported 00:13:45.322 Asynchronous Event Request (0Ch): Supported 00:13:45.322 Keep Alive (18h): Supported 00:13:45.322 I/O Commands 00:13:45.322 ------------ 00:13:45.322 Flush (00h): Supported LBA-Change 00:13:45.322 Write (01h): Supported LBA-Change 00:13:45.322 Read (02h): Supported 00:13:45.322 Compare (05h): Supported 00:13:45.322 Write Zeroes (08h): Supported LBA-Change 00:13:45.322 Dataset Management (09h): Supported LBA-Change 00:13:45.322 Copy (19h): Supported LBA-Change 00:13:45.322 00:13:45.322 Error Log 00:13:45.322 ========= 00:13:45.322 00:13:45.322 Arbitration 00:13:45.322 =========== 00:13:45.322 Arbitration Burst: 1 00:13:45.322 00:13:45.322 Power Management 00:13:45.322 ================ 00:13:45.322 Number of Power States: 1 00:13:45.322 Current Power State: Power State #0 00:13:45.322 Power State #0: 00:13:45.322 Max Power: 0.00 W 00:13:45.322 Non-Operational State: Operational 00:13:45.322 Entry Latency: Not Reported 00:13:45.322 Exit Latency: Not Reported 00:13:45.322 Relative Read Throughput: 0 00:13:45.322 Relative Read Latency: 0 00:13:45.322 Relative Write Throughput: 0 00:13:45.322 Relative Write Latency: 0 00:13:45.322 Idle Power: Not Reported 00:13:45.322 Active Power: Not Reported 00:13:45.322 Non-Operational Permissive Mode: Not Supported 00:13:45.322 00:13:45.322 Health Information 00:13:45.322 ================== 00:13:45.322 Critical Warnings: 00:13:45.322 Available Spare Space: OK 00:13:45.322 Temperature: OK 00:13:45.322 Device Reliability: OK 00:13:45.322 Read Only: No 00:13:45.322 Volatile Memory Backup: OK 00:13:45.322 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:45.322 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:45.322 Available Spare: 0% 00:13:45.322 [2024-12-12 10:27:19.230694] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:45.322 [2024-12-12 10:27:19.238575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:45.322 [2024-12-12 10:27:19.238603] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:13:45.322 [2024-12-12 10:27:19.238611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.322 [2024-12-12 10:27:19.238616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.322 [2024-12-12 10:27:19.238622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.322 [2024-12-12 10:27:19.238627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.322 [2024-12-12 10:27:19.238674] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:45.322 [2024-12-12 10:27:19.238684] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:45.322 [2024-12-12 10:27:19.239678] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:45.322 [2024-12-12 10:27:19.239720] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:13:45.322 [2024-12-12 10:27:19.239726] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:13:45.322 [2024-12-12 10:27:19.240687] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:45.322 [2024-12-12 10:27:19.240698] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:13:45.322 [2024-12-12 10:27:19.240748] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:45.322 [2024-12-12 10:27:19.241697] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:45.322 Available Spare Threshold: 0% 00:13:45.322 Life Percentage Used: 0% 00:13:45.322 Data Units Read: 0 00:13:45.322 Data Units Written: 0 00:13:45.322 Host Read Commands: 0 00:13:45.322 Host Write Commands: 0 00:13:45.322 Controller Busy Time: 0 minutes 00:13:45.322 Power Cycles: 0 00:13:45.322 Power On Hours: 0 hours 00:13:45.322 Unsafe Shutdowns: 0 00:13:45.322 Unrecoverable Media Errors: 0 00:13:45.322 Lifetime Error Log Entries: 0 00:13:45.322 Warning Temperature Time: 0 minutes 00:13:45.322 Critical Temperature Time: 0 minutes 00:13:45.322 00:13:45.322 Number of Queues 00:13:45.322 ================ 00:13:45.322 Number of I/O Submission Queues: 127 00:13:45.322 Number of I/O Completion Queues: 127 00:13:45.322 00:13:45.322 Active Namespaces 00:13:45.322 ================= 00:13:45.322 Namespace ID:1 00:13:45.322 Error Recovery Timeout: Unlimited 00:13:45.322 Command Set Identifier: NVM (00h) 00:13:45.322 Deallocate: Supported 00:13:45.322 Deallocated/Unwritten Error: Not Supported 00:13:45.322 Deallocated Read Value: Unknown 00:13:45.322 Deallocate in Write Zeroes: Not Supported 00:13:45.322 Deallocated Guard Field: 0xFFFF 00:13:45.322 Flush: Supported 00:13:45.322 Reservation: Supported 00:13:45.322 Namespace Sharing Capabilities: Multiple Controllers 00:13:45.322 Size (in LBAs): 131072 (0GiB) 00:13:45.322 Capacity (in LBAs): 131072 (0GiB) 00:13:45.322 Utilization (in LBAs): 131072 (0GiB) 00:13:45.322 NGUID: 35D3B09687804F9B9A0BA7C60825904E 00:13:45.322 UUID: 35d3b096-8780-4f9b-9a0b-a7c60825904e 00:13:45.322 Thin Provisioning: Not Supported 00:13:45.322 Per-NS Atomic Units: Yes 00:13:45.322 Atomic Boundary Size (Normal): 0 00:13:45.322 Atomic Boundary Size (PFail): 0 00:13:45.322 Atomic Boundary Offset: 0 00:13:45.323 Maximum Single Source Range Length: 65535 00:13:45.323 Maximum Copy Length: 65535 00:13:45.323 Maximum Source Range Count: 1 00:13:45.323 NGUID/EUI64 Never Reused: No 00:13:45.323 Namespace Write Protected: No 00:13:45.323 Number of LBA Formats: 1 00:13:45.323 Current LBA Format: LBA Format #00 00:13:45.323 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:45.323 00:13:45.323 10:27:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:45.581 [2024-12-12 10:27:19.460766] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:50.848 Initializing NVMe Controllers 00:13:50.848
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:50.848 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:50.848 Initialization complete. Launching workers. 00:13:50.848 ======================================================== 00:13:50.848 Latency(us) 00:13:50.848 Device Information : IOPS MiB/s Average min max 00:13:50.848 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39950.57 156.06 3203.79 972.77 6753.91 00:13:50.848 ======================================================== 00:13:50.848 Total : 39950.57 156.06 3203.79 972.77 6753.91 00:13:50.848 00:13:50.848 [2024-12-12 10:27:24.566836] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:50.848 10:27:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:50.848 [2024-12-12 10:27:24.805540] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:56.114 Initializing NVMe Controllers 00:13:56.114 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:56.114 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:56.114 Initialization complete. Launching workers. 00:13:56.114 ======================================================== 00:13:56.114 Latency(us) 00:13:56.114 Device Information : IOPS MiB/s Average min max 00:13:56.114 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39948.59 156.05 3204.65 980.71 9589.60 00:13:56.114 ======================================================== 00:13:56.114 Total : 39948.59 156.05 3204.65 980.71 9589.60 00:13:56.114 00:13:56.114 [2024-12-12 10:27:29.826281] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:56.114 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:56.114 [2024-12-12 10:27:30.032010] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:01.383 [2024-12-12 10:27:35.166665] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:01.383 Initializing NVMe Controllers 00:14:01.383 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:01.383 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:01.383 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:01.383 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:01.383 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:01.383 Initialization complete. Launching workers. 
00:14:01.383 Starting thread on core 2 00:14:01.383 Starting thread on core 3 00:14:01.383 Starting thread on core 1 00:14:01.383 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:01.641 [2024-12-12 10:27:35.459958] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:04.929 [2024-12-12 10:27:38.514291] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:04.929 Initializing NVMe Controllers 00:14:04.929 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:04.929 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:04.929 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:04.929 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:04.929 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:04.929 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:04.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:04.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:04.929 Initialization complete. Launching workers. 00:14:04.929 Starting thread on core 1 with urgent priority queue 00:14:04.929 Starting thread on core 2 with urgent priority queue 00:14:04.929 Starting thread on core 3 with urgent priority queue 00:14:04.929 Starting thread on core 0 with urgent priority queue 00:14:04.929 SPDK bdev Controller (SPDK2 ) core 0: 9331.00 IO/s 10.72 secs/100000 ios 00:14:04.929 SPDK bdev Controller (SPDK2 ) core 1: 8245.67 IO/s 12.13 secs/100000 ios 00:14:04.929 SPDK bdev Controller (SPDK2 ) core 2: 8519.33 IO/s 11.74 secs/100000 ios 00:14:04.929 SPDK bdev Controller (SPDK2 ) core 3: 9794.33 IO/s 10.21 secs/100000 ios 00:14:04.929 ======================================================== 00:14:04.929 00:14:04.929 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:04.929 [2024-12-12 10:27:38.796992] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:04.929 Initializing NVMe Controllers 00:14:04.929 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:04.929 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:04.929 Namespace ID: 1 size: 0GB 00:14:04.929 Initialization complete. 00:14:04.929 INFO: using host memory buffer for IO 00:14:04.929 Hello world! 
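The perf and arbitration summaries above are internally consistent, and re-deriving their computed columns is a quick sanity check when triaging a run: MiB/s is just IOPS times the 4 KiB I/O size, and the arbitration tool's "secs/100000 ios" is the reciprocal of its IO/s figure. A sketch using the numbers from the read run and from core 0 of the arbitration table:

  # 39950.57 IOPS at -o 4096 bytes => the MiB/s column of the read run
  awk 'BEGIN { printf "%.2f\n", 39950.57 * 4096 / (1024 * 1024) }'   # 156.06
  # 9331.00 IO/s on core 0 => seconds per 100000 ios
  awk 'BEGIN { printf "%.2f\n", 100000 / 9331.00 }'                  # 10.72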
00:14:04.929 [2024-12-12 10:27:38.809065] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:04.929 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:05.187 [2024-12-12 10:27:39.083951] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:06.563 Initializing NVMe Controllers 00:14:06.563 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:06.563 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:06.563 Initialization complete. Launching workers. 00:14:06.563 submit (in ns) avg, min, max = 6341.6, 3178.1, 4006244.8 00:14:06.563 complete (in ns) avg, min, max = 23434.5, 1770.5, 3999164.8 00:14:06.563 00:14:06.563 Submit histogram 00:14:06.563 ================ 00:14:06.563 Range in us Cumulative Count 00:14:06.563 3.170 - 3.185: 0.0247% ( 4) 00:14:06.563 3.185 - 3.200: 0.5861% ( 91) 00:14:06.563 3.200 - 3.215: 3.2326% ( 429) 00:14:06.563 3.215 - 3.230: 8.8341% ( 908) 00:14:06.563 3.230 - 3.246: 14.1888% ( 868) 00:14:06.563 3.246 - 3.261: 20.7896% ( 1070) 00:14:06.563 3.261 - 3.276: 28.5071% ( 1251) 00:14:06.563 3.276 - 3.291: 35.0586% ( 1062) 00:14:06.563 3.291 - 3.307: 40.7279% ( 919) 00:14:06.563 3.307 - 3.322: 44.9044% ( 677) 00:14:06.563 3.322 - 3.337: 49.3523% ( 721) 00:14:06.563 3.337 - 3.352: 52.8007% ( 559) 00:14:06.563 3.352 - 3.368: 58.0074% ( 844) 00:14:06.563 3.368 - 3.383: 65.7680% ( 1258) 00:14:06.563 3.383 - 3.398: 70.6724% ( 795) 00:14:06.563 3.398 - 3.413: 76.3849% ( 926) 00:14:06.563 3.413 - 3.429: 80.8328% ( 721) 00:14:06.563 3.429 - 3.444: 83.6521% ( 457) 00:14:06.563 3.444 - 3.459: 85.2622% ( 261) 00:14:06.563 3.459 - 3.474: 86.0518% ( 128) 00:14:06.563 3.474 - 3.490: 86.4405% ( 63) 00:14:06.563 3.490 - 3.505: 86.9278% ( 79) 00:14:06.563 3.505 - 3.520: 87.6126% ( 111) 00:14:06.563 3.520 - 3.535: 88.4886% ( 142) 00:14:06.563 3.535 - 3.550: 89.4941% ( 163) 00:14:06.563 3.550 - 3.566: 90.5676% ( 174) 00:14:06.563 3.566 - 3.581: 91.4189% ( 138) 00:14:06.563 3.581 - 3.596: 92.2579% ( 136) 00:14:06.563 3.596 - 3.611: 93.0907% ( 135) 00:14:06.563 3.611 - 3.627: 93.9914% ( 146) 00:14:06.563 3.627 - 3.642: 94.9537% ( 156) 00:14:06.563 3.642 - 3.657: 95.8359% ( 143) 00:14:06.563 3.657 - 3.672: 96.5207% ( 111) 00:14:06.563 3.672 - 3.688: 96.9340% ( 67) 00:14:06.563 3.688 - 3.703: 97.3288% ( 64) 00:14:06.563 3.703 - 3.718: 97.6990% ( 60) 00:14:06.563 3.718 - 3.733: 97.9951% ( 48) 00:14:06.564 3.733 - 3.749: 98.2357% ( 39) 00:14:06.564 3.749 - 3.764: 98.3775% ( 23) 00:14:06.564 3.764 - 3.779: 98.5688% ( 31) 00:14:06.564 3.779 - 3.794: 98.6428% ( 12) 00:14:06.564 3.794 - 3.810: 98.7600% ( 19) 00:14:06.564 3.810 - 3.825: 98.8341% ( 12) 00:14:06.564 3.825 - 3.840: 98.8896% ( 9) 00:14:06.564 3.840 - 3.855: 98.9389% ( 8) 00:14:06.564 3.855 - 3.870: 98.9759% ( 6) 00:14:06.564 3.870 - 3.886: 99.0130% ( 6) 00:14:06.564 3.886 - 3.901: 99.0500% ( 6) 00:14:06.564 3.901 - 3.931: 99.0993% ( 8) 00:14:06.564 3.931 - 3.962: 99.1487% ( 8) 00:14:06.564 3.962 - 3.992: 99.1919% ( 7) 00:14:06.564 3.992 - 4.023: 99.2289% ( 6) 00:14:06.564 4.023 - 4.053: 99.2535% ( 4) 00:14:06.564 4.053 - 4.084: 99.2597% ( 1) 00:14:06.564 4.084 - 4.114: 99.2782% ( 3) 00:14:06.564 4.114 - 4.145: 99.3029% ( 4) 00:14:06.564 4.145 - 4.175: 99.3337% ( 5) 
00:14:06.564 4.175 - 4.206: 99.3523% ( 3) 00:14:06.564 4.206 - 4.236: 99.3708% ( 3) 00:14:06.564 4.236 - 4.267: 99.3769% ( 1) 00:14:06.564 4.267 - 4.297: 99.3831% ( 1) 00:14:06.564 4.297 - 4.328: 99.3893% ( 1) 00:14:06.564 4.328 - 4.358: 99.3954% ( 1) 00:14:06.564 4.358 - 4.389: 99.4016% ( 1) 00:14:06.564 4.450 - 4.480: 99.4078% ( 1) 00:14:06.564 4.480 - 4.510: 99.4139% ( 1) 00:14:06.564 4.632 - 4.663: 99.4201% ( 1) 00:14:06.564 4.663 - 4.693: 99.4263% ( 1) 00:14:06.564 4.785 - 4.815: 99.4324% ( 1) 00:14:06.564 4.937 - 4.968: 99.4386% ( 1) 00:14:06.564 5.059 - 5.090: 99.4448% ( 1) 00:14:06.564 5.150 - 5.181: 99.4510% ( 1) 00:14:06.564 5.181 - 5.211: 99.4571% ( 1) 00:14:06.564 5.211 - 5.242: 99.4633% ( 1) 00:14:06.564 5.272 - 5.303: 99.4756% ( 2) 00:14:06.564 5.608 - 5.638: 99.4818% ( 1) 00:14:06.564 5.638 - 5.669: 99.4880% ( 1) 00:14:06.564 5.699 - 5.730: 99.5003% ( 2) 00:14:06.564 5.821 - 5.851: 99.5065% ( 1) 00:14:06.564 5.851 - 5.882: 99.5126% ( 1) 00:14:06.564 5.912 - 5.943: 99.5188% ( 1) 00:14:06.564 6.004 - 6.034: 99.5312% ( 2) 00:14:06.564 6.034 - 6.065: 99.5373% ( 1) 00:14:06.564 6.126 - 6.156: 99.5497% ( 2) 00:14:06.564 6.156 - 6.187: 99.5558% ( 1) 00:14:06.564 6.248 - 6.278: 99.5620% ( 1) 00:14:06.564 6.370 - 6.400: 99.5682% ( 1) 00:14:06.564 6.430 - 6.461: 99.5743% ( 1) 00:14:06.564 6.491 - 6.522: 99.5805% ( 1) 00:14:06.564 6.522 - 6.552: 99.5867% ( 1) 00:14:06.564 6.552 - 6.583: 99.5990% ( 2) 00:14:06.564 6.613 - 6.644: 99.6114% ( 2) 00:14:06.564 6.644 - 6.674: 99.6175% ( 1) 00:14:06.564 6.674 - 6.705: 99.6237% ( 1) 00:14:06.564 6.705 - 6.735: 99.6299% ( 1) 00:14:06.564 6.735 - 6.766: 99.6360% ( 1) 00:14:06.564 6.796 - 6.827: 99.6422% ( 1) 00:14:06.564 6.827 - 6.857: 99.6669% ( 4) 00:14:06.564 6.857 - 6.888: 99.6792% ( 2) 00:14:06.564 6.888 - 6.918: 99.6854% ( 1) 00:14:06.564 7.010 - 7.040: 99.6915% ( 1) 00:14:06.564 7.131 - 7.162: 99.6977% ( 1) 00:14:06.564 7.162 - 7.192: 99.7039% ( 1) 00:14:06.564 7.223 - 7.253: 99.7101% ( 1) 00:14:06.564 7.284 - 7.314: 99.7162% ( 1) 00:14:06.564 7.345 - 7.375: 99.7224% ( 1) 00:14:06.564 7.406 - 7.436: 99.7286% ( 1) 00:14:06.564 7.436 - 7.467: 99.7347% ( 1) 00:14:06.564 7.467 - 7.497: 99.7471% ( 2) 00:14:06.564 7.558 - 7.589: 99.7532% ( 1) 00:14:06.564 7.589 - 7.619: 99.7594% ( 1) 00:14:06.564 7.680 - 7.710: 99.7656% ( 1) 00:14:06.564 7.710 - 7.741: 99.7717% ( 1) 00:14:06.564 7.802 - 7.863: 99.7779% ( 1) 00:14:06.564 7.985 - 8.046: 99.7903% ( 2) 00:14:06.564 8.046 - 8.107: 99.7964% ( 1) 00:14:06.564 8.107 - 8.168: 99.8026% ( 1) 00:14:06.564 8.168 - 8.229: 99.8088% ( 1) 00:14:06.564 8.229 - 8.290: 99.8211% ( 2) 00:14:06.564 8.290 - 8.350: 99.8334% ( 2) 00:14:06.564 8.350 - 8.411: 99.8396% ( 1) 00:14:06.564 8.594 - 8.655: 99.8581% ( 3) 00:14:06.564 8.960 - 9.021: 99.8643% ( 1) 00:14:06.564 9.874 - 9.935: 99.8705% ( 1) 00:14:06.564 10.667 - 10.728: 99.8766% ( 1) 00:14:06.564 11.154 - 11.215: 99.8828% ( 1) 00:14:06.564 11.276 - 11.337: 99.8890% ( 1) 00:14:06.564 11.337 - 11.398: 99.8951% ( 1) 00:14:06.564 11.581 - 11.642: 99.9013% ( 1) 00:14:06.564 16.579 - 16.701: 99.9075% ( 1) 00:14:06.564 19.017 - 19.139: 99.9136% ( 1) 00:14:06.564 19.139 - 19.261: 99.9198% ( 1) 00:14:06.564 21.090 - 21.211: 99.9260% ( 1) 00:14:06.564 3994.575 - 4025.783: 100.0000% ( 12) 00:14:06.564 00:14:06.564 Complete histogram 00:14:06.564 ================== 00:14:06.564 Range in us Cumulative Count 00:14:06.564 1.768 - 1.775: 0.1666% ( 27) 00:14:06.564 1.775 - 1.783: 2.1221% ( 317) 00:14:06.564 1.783 - 1.790: 11.4436% ( 1511) 00:14:06.564 1.790 - 1.798: 25.1018% ( 2214) 
00:14:06.564 1.798 - 1.806: 34.0099% ( 1444) 00:14:06.564 1.806 - 1.813: 38.1123% ( 665) 00:14:06.564 1.813 - 1.821: 40.1912% ( 337) 00:14:06.564 1.821 - 1.829: 41.5731% ( 224) 00:14:06.564 1.829 - 1.836: 42.2209% ( 105) 00:14:06.564 1.836 - 1.844: 44.4911% ( 368) 00:14:06.564 1.844 - 1.851: 54.1888% ( 1572) 00:14:06.564 1.851 - 1.859: 71.7397% ( 2845) 00:14:06.564 1.859 - 1.867: 84.6638% ( 2095) 00:14:06.564 1.867 - 1.874: 90.4750% ( 942) 00:14:06.564 1.874 - 1.882: 92.8748% ( 389) 00:14:06.564 1.882 - 1.890: 94.3368% ( 237) 00:14:06.564 1.890 - 1.897: 95.1820% ( 137) 00:14:06.564 1.897 - 1.905: 95.5583% ( 61) 00:14:06.564 1.905 - 1.912: 96.0271% ( 76) 00:14:06.564 1.912 - 1.920: 96.4220% ( 64) 00:14:06.564 1.920 - 1.928: 96.7428% ( 52) 00:14:06.564 1.928 - 1.935: 97.1684% ( 69) 00:14:06.564 1.935 - 1.943: 97.4892% ( 52) 00:14:06.564 1.943 - 1.950: 97.6866% ( 32) 00:14:06.564 1.950 - 1.966: 97.8285% ( 23) 00:14:06.564 1.966 - 1.981: 97.9457% ( 19) 00:14:06.564 1.981 - 1.996: 97.9951% ( 8) 00:14:06.564 1.996 - 2.011: 98.0197% ( 4) 00:14:06.564 2.011 - 2.027: 98.0629% ( 7) 00:14:06.564 2.027 - 2.042: 98.1246% ( 10) 
[2024-12-12 10:27:40.186689] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 
00:14:06.564 2.042 - 2.057: 98.1801% ( 9) 00:14:06.564 2.057 - 2.072: 98.1863% ( 1) 00:14:06.564 2.072 - 2.088: 98.1986% ( 2) 00:14:06.564 2.088 - 2.103: 98.3097% ( 18) 00:14:06.564 2.103 - 2.118: 98.3899% ( 13) 00:14:06.564 2.118 - 2.133: 98.4022% ( 2) 00:14:06.564 2.164 - 2.179: 98.4886% ( 14) 00:14:06.564 2.179 - 2.194: 98.5811% ( 15) 00:14:06.564 2.194 - 2.210: 98.6120% ( 5) 00:14:06.564 2.210 - 2.225: 98.6181% ( 1) 00:14:06.564 2.225 - 2.240: 98.6428% ( 4) 00:14:06.564 2.240 - 2.255: 98.7785% ( 22) 00:14:06.564 2.255 - 2.270: 98.8587% ( 13) 00:14:06.564 2.270 - 2.286: 98.9266% ( 11) 00:14:06.564 2.286 - 2.301: 98.9698% ( 7) 00:14:06.564 2.301 - 2.316: 99.0006% ( 5) 00:14:06.564 2.316 - 2.331: 99.0191% ( 3) 00:14:06.564 2.347 - 2.362: 99.0376% ( 3) 00:14:06.564 2.362 - 2.377: 99.0500% ( 2) 00:14:06.564 2.377 - 2.392: 99.0561% ( 1) 00:14:06.564 2.392 - 2.408: 99.0623% ( 1) 00:14:06.564 2.423 - 2.438: 99.0685% ( 1) 00:14:06.564 2.499 - 2.514: 99.0746% ( 1) 00:14:06.564 2.575 - 2.590: 99.0808% ( 1) 00:14:06.564 2.590 - 2.606: 99.0870% ( 1) 00:14:06.564 2.636 - 2.651: 99.0932% ( 1) 00:14:06.564 2.667 - 2.682: 99.0993% ( 1) 00:14:06.564 2.697 - 2.712: 99.1055% ( 1) 00:14:06.564 2.728 - 2.743: 99.1117% ( 1) 00:14:06.564 2.773 - 2.789: 99.1178% ( 1) 00:14:06.564 2.850 - 2.865: 99.1240% ( 1) 00:14:06.564 3.002 - 3.017: 99.1302% ( 1) 00:14:06.564 3.093 - 3.109: 99.1363% ( 1) 00:14:06.564 3.261 - 3.276: 99.1425% ( 1) 00:14:06.564 3.657 - 3.672: 99.1487% ( 1) 00:14:06.564 3.764 - 3.779: 99.1548% ( 1) 00:14:06.564 3.962 - 3.992: 99.1610% ( 1) 00:14:06.564 3.992 - 4.023: 99.1672% ( 1) 00:14:06.564 4.114 - 4.145: 99.1733% ( 1) 00:14:06.564 4.145 - 4.175: 99.1795% ( 1) 00:14:06.564 4.328 - 4.358: 99.1857% ( 1) 00:14:06.564 4.419 - 4.450: 99.1919% ( 1) 00:14:06.564 4.510 - 4.541: 99.1980% ( 1) 00:14:06.564 4.724 - 4.754: 99.2042% ( 1) 00:14:06.564 4.876 - 4.907: 99.2104% ( 1) 00:14:06.564 4.937 - 4.968: 99.2165% ( 1) 00:14:06.564 5.090 - 5.120: 99.2227% ( 1) 00:14:06.564 5.211 - 5.242: 99.2289% ( 1) 00:14:06.564 5.303 - 5.333: 99.2412% ( 2) 00:14:06.564 5.333 - 5.364: 99.2474% ( 1) 00:14:06.564 5.364 - 5.394: 99.2535% ( 1) 00:14:06.564 5.425 - 5.455: 99.2597% ( 1) 00:14:06.564 5.455 - 5.486: 99.2659% ( 1) 00:14:06.564 5.821 - 5.851: 99.2721% ( 1) 
00:14:06.564 5.882 - 5.912: 99.2782% ( 1) 00:14:06.564 6.034 - 6.065: 99.2844% ( 1) 00:14:06.564 6.095 - 6.126: 99.2906% ( 1) 00:14:06.564 6.187 - 6.217: 99.3029% ( 2) 00:14:06.564 6.217 - 6.248: 99.3091% ( 1) 00:14:06.564 6.400 - 6.430: 99.3152% ( 1) 00:14:06.564 6.491 - 6.522: 99.3214% ( 1) 00:14:06.564 6.918 - 6.949: 99.3276% ( 1) 00:14:06.565 7.192 - 7.223: 99.3337% ( 1) 00:14:06.565 7.406 - 7.436: 99.3399% ( 1) 00:14:06.565 7.497 - 7.528: 99.3461% ( 1) 00:14:06.565 7.558 - 7.589: 99.3523% ( 1) 00:14:06.565 7.802 - 7.863: 99.3584% ( 1) 00:14:06.565 7.924 - 7.985: 99.3646% ( 1) 00:14:06.565 8.290 - 8.350: 99.3708% ( 1) 00:14:06.565 10.667 - 10.728: 99.3769% ( 1) 00:14:06.565 11.154 - 11.215: 99.3831% ( 1) 00:14:06.565 11.215 - 11.276: 99.3893% ( 1) 00:14:06.565 12.130 - 12.190: 99.3954% ( 1) 00:14:06.565 12.373 - 12.434: 99.4016% ( 1) 00:14:06.565 12.983 - 13.044: 99.4078% ( 1) 00:14:06.565 13.775 - 13.836: 99.4139% ( 1) 00:14:06.565 15.604 - 15.726: 99.4201% ( 1) 00:14:06.565 17.798 - 17.920: 99.4263% ( 1) 00:14:06.565 19.627 - 19.749: 99.4324% ( 1) 00:14:06.565 20.480 - 20.602: 99.4386% ( 1) 00:14:06.565 24.625 - 24.747: 99.4448% ( 1) 00:14:06.565 26.941 - 27.063: 99.4510% ( 1) 00:14:06.565 32.427 - 32.670: 99.4571% ( 1) 00:14:06.565 2168.930 - 2184.533: 99.4633% ( 1) 00:14:06.565 3994.575 - 4025.783: 100.0000% ( 87) 00:14:06.565 00:14:06.565 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:06.565 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:06.565 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:06.565 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:06.565 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:06.565 [ 00:14:06.565 { 00:14:06.565 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:06.565 "subtype": "Discovery", 00:14:06.565 "listen_addresses": [], 00:14:06.565 "allow_any_host": true, 00:14:06.565 "hosts": [] 00:14:06.565 }, 00:14:06.565 { 00:14:06.565 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:06.565 "subtype": "NVMe", 00:14:06.565 "listen_addresses": [ 00:14:06.565 { 00:14:06.565 "trtype": "VFIOUSER", 00:14:06.565 "adrfam": "IPv4", 00:14:06.565 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:06.565 "trsvcid": "0" 00:14:06.565 } 00:14:06.565 ], 00:14:06.565 "allow_any_host": true, 00:14:06.565 "hosts": [], 00:14:06.565 "serial_number": "SPDK1", 00:14:06.565 "model_number": "SPDK bdev Controller", 00:14:06.565 "max_namespaces": 32, 00:14:06.565 "min_cntlid": 1, 00:14:06.565 "max_cntlid": 65519, 00:14:06.565 "namespaces": [ 00:14:06.565 { 00:14:06.565 "nsid": 1, 00:14:06.565 "bdev_name": "Malloc1", 00:14:06.565 "name": "Malloc1", 00:14:06.565 "nguid": "4B2809CBAA3A446ABC9964338BEA352E", 00:14:06.565 "uuid": "4b2809cb-aa3a-446a-bc99-64338bea352e" 00:14:06.565 }, 00:14:06.565 { 00:14:06.565 "nsid": 2, 00:14:06.565 "bdev_name": "Malloc3", 00:14:06.565 "name": "Malloc3", 00:14:06.565 "nguid": "738757497C0C4F0D9376F6C2C4E232AB", 00:14:06.565 "uuid": "73875749-7c0c-4f0d-9376-f6c2c4e232ab" 00:14:06.565 } 00:14:06.565 ] 00:14:06.565 }, 00:14:06.565 { 00:14:06.565 "nqn": 
"nqn.2019-07.io.spdk:cnode2", 00:14:06.565 "subtype": "NVMe", 00:14:06.565 "listen_addresses": [ 00:14:06.565 { 00:14:06.565 "trtype": "VFIOUSER", 00:14:06.565 "adrfam": "IPv4", 00:14:06.565 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:06.565 "trsvcid": "0" 00:14:06.565 } 00:14:06.565 ], 00:14:06.565 "allow_any_host": true, 00:14:06.565 "hosts": [], 00:14:06.565 "serial_number": "SPDK2", 00:14:06.565 "model_number": "SPDK bdev Controller", 00:14:06.565 "max_namespaces": 32, 00:14:06.565 "min_cntlid": 1, 00:14:06.565 "max_cntlid": 65519, 00:14:06.565 "namespaces": [ 00:14:06.565 { 00:14:06.565 "nsid": 1, 00:14:06.565 "bdev_name": "Malloc2", 00:14:06.565 "name": "Malloc2", 00:14:06.565 "nguid": "35D3B09687804F9B9A0BA7C60825904E", 00:14:06.565 "uuid": "35d3b096-8780-4f9b-9a0b-a7c60825904e" 00:14:06.565 } 00:14:06.565 ] 00:14:06.565 } 00:14:06.565 ] 00:14:06.565 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:06.565 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1486056 00:14:06.565 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:06.565 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:06.565 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:06.565 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:06.565 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:06.565 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:06.565 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:06.565 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:06.824 [2024-12-12 10:27:40.598029] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:06.824 Malloc4 00:14:06.824 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:06.824 [2024-12-12 10:27:40.839870] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:07.083 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:07.083 Asynchronous Event Request test 00:14:07.083 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:07.083 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:07.083 Registering asynchronous event callbacks... 00:14:07.083 Starting namespace attribute notice tests for all controllers... 
00:14:07.083 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:07.083 aer_cb - Changed Namespace 00:14:07.083 Cleaning up... 00:14:07.083 [ 00:14:07.083 { 00:14:07.083 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:07.083 "subtype": "Discovery", 00:14:07.083 "listen_addresses": [], 00:14:07.083 "allow_any_host": true, 00:14:07.083 "hosts": [] 00:14:07.083 }, 00:14:07.083 { 00:14:07.083 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:07.083 "subtype": "NVMe", 00:14:07.083 "listen_addresses": [ 00:14:07.083 { 00:14:07.083 "trtype": "VFIOUSER", 00:14:07.083 "adrfam": "IPv4", 00:14:07.083 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:07.083 "trsvcid": "0" 00:14:07.083 } 00:14:07.083 ], 00:14:07.083 "allow_any_host": true, 00:14:07.083 "hosts": [], 00:14:07.083 "serial_number": "SPDK1", 00:14:07.083 "model_number": "SPDK bdev Controller", 00:14:07.083 "max_namespaces": 32, 00:14:07.083 "min_cntlid": 1, 00:14:07.083 "max_cntlid": 65519, 00:14:07.083 "namespaces": [ 00:14:07.083 { 00:14:07.083 "nsid": 1, 00:14:07.083 "bdev_name": "Malloc1", 00:14:07.083 "name": "Malloc1", 00:14:07.083 "nguid": "4B2809CBAA3A446ABC9964338BEA352E", 00:14:07.083 "uuid": "4b2809cb-aa3a-446a-bc99-64338bea352e" 00:14:07.083 }, 00:14:07.083 { 00:14:07.083 "nsid": 2, 00:14:07.083 "bdev_name": "Malloc3", 00:14:07.083 "name": "Malloc3", 00:14:07.083 "nguid": "738757497C0C4F0D9376F6C2C4E232AB", 00:14:07.083 "uuid": "73875749-7c0c-4f0d-9376-f6c2c4e232ab" 00:14:07.083 } 00:14:07.083 ] 00:14:07.083 }, 00:14:07.083 { 00:14:07.083 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:07.083 "subtype": "NVMe", 00:14:07.083 "listen_addresses": [ 00:14:07.083 { 00:14:07.083 "trtype": "VFIOUSER", 00:14:07.083 "adrfam": "IPv4", 00:14:07.083 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:07.083 "trsvcid": "0" 00:14:07.083 } 00:14:07.083 ], 00:14:07.083 "allow_any_host": true, 00:14:07.083 "hosts": [], 00:14:07.083 "serial_number": "SPDK2", 00:14:07.083 "model_number": "SPDK bdev Controller", 00:14:07.083 "max_namespaces": 32, 00:14:07.083 "min_cntlid": 1, 00:14:07.083 "max_cntlid": 65519, 00:14:07.083 "namespaces": [ 00:14:07.083 { 00:14:07.083 "nsid": 1, 00:14:07.083 "bdev_name": "Malloc2", 00:14:07.083 "name": "Malloc2", 00:14:07.083 "nguid": "35D3B09687804F9B9A0BA7C60825904E", 00:14:07.083 "uuid": "35d3b096-8780-4f9b-9a0b-a7c60825904e" 00:14:07.083 }, 00:14:07.083 { 00:14:07.083 "nsid": 2, 00:14:07.083 "bdev_name": "Malloc4", 00:14:07.083 "name": "Malloc4", 00:14:07.083 "nguid": "C482D4F7245448B88752D3149674C0A5", 00:14:07.083 "uuid": "c482d4f7-2454-48b8-8752-d3149674c0a5" 00:14:07.083 } 00:14:07.083 ] 00:14:07.083 } 00:14:07.083 ] 00:14:07.083 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1486056 00:14:07.083 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:07.083 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1478084 00:14:07.083 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1478084 ']' 00:14:07.083 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1478084 00:14:07.083 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:07.083 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:07.083 
10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1478084 00:14:07.342 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:07.342 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:07.342 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1478084' 00:14:07.342 killing process with pid 1478084 00:14:07.342 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1478084 00:14:07.342 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1478084 00:14:07.343 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:07.343 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:07.343 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:07.343 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:07.343 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:07.343 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1486252 00:14:07.343 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1486252' 00:14:07.343 Process pid: 1486252 00:14:07.343 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:07.343 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:07.343 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1486252 00:14:07.343 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1486252 ']' 00:14:07.343 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.343 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:07.343 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.343 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:07.343 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:07.603 [2024-12-12 10:27:41.404047] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:07.603 [2024-12-12 10:27:41.404886] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:14:07.603 [2024-12-12 10:27:41.404922] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:07.603 [2024-12-12 10:27:41.478958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:07.603 [2024-12-12 10:27:41.520196] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:07.603 [2024-12-12 10:27:41.520234] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:07.603 [2024-12-12 10:27:41.520241] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:07.603 [2024-12-12 10:27:41.520247] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:07.603 [2024-12-12 10:27:41.520252] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:07.603 [2024-12-12 10:27:41.521665] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:14:07.603 [2024-12-12 10:27:41.521773] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:14:07.603 [2024-12-12 10:27:41.521879] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.603 [2024-12-12 10:27:41.521881] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:14:07.603 [2024-12-12 10:27:41.589678] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:07.603 [2024-12-12 10:27:41.590836] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:07.603 [2024-12-12 10:27:41.590840] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:14:07.603 [2024-12-12 10:27:41.591205] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:14:07.603 [2024-12-12 10:27:41.591253] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
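Condensed, the interrupt-mode vfio-user setup that the trace below performs amounts to the following sketch (assembled from the traced commands; $rpc abbreviates the fully-qualified scripts/rpc.py path, and -M -I are simply the transport flags the harness passes for this --interrupt-mode run):

    $rpc nvmf_create_transport -t VFIOUSER -M -I
    for i in 1 2; do
      mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i        # socket directory for controller $i
      $rpc bdev_malloc_create 64 512 -b Malloc$i               # 64 MiB backing bdev, 512-byte blocks
      $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
        -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done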
00:14:07.603 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:07.603 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:07.603 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:08.981 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:08.981 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:08.981 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:08.981 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:08.981 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:08.981 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:09.240 Malloc1 00:14:09.240 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:09.240 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:09.498 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:09.757 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:09.757 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:09.757 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:10.016 Malloc2 00:14:10.016 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:10.274 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:10.274 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:10.533 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:10.533 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1486252 00:14:10.533 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 1486252 ']' 00:14:10.533 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1486252 00:14:10.533 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:10.533 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:10.533 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1486252 00:14:10.533 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:10.533 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:10.533 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1486252' 00:14:10.533 killing process with pid 1486252 00:14:10.533 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1486252 00:14:10.533 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1486252 00:14:10.792 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:10.792 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:10.792 00:14:10.792 real 0m50.733s 00:14:10.792 user 3m16.291s 00:14:10.792 sys 0m3.199s 00:14:10.792 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:10.792 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:10.792 ************************************ 00:14:10.792 END TEST nvmf_vfio_user 00:14:10.792 ************************************ 00:14:10.792 10:27:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:10.792 10:27:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:10.792 10:27:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:10.792 10:27:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:10.792 ************************************ 00:14:10.792 START TEST nvmf_vfio_user_nvme_compliance 00:14:10.792 ************************************ 00:14:10.792 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:11.051 * Looking for test storage... 
00:14:11.051 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:11.051 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:11.051 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:14:11.051 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:11.051 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:11.051 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:11.051 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:11.051 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:11.051 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:14:11.051 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:14:11.051 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:14:11.051 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:14:11.051 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:11.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.052 --rc genhtml_branch_coverage=1 00:14:11.052 --rc genhtml_function_coverage=1 00:14:11.052 --rc genhtml_legend=1 00:14:11.052 --rc geninfo_all_blocks=1 00:14:11.052 --rc geninfo_unexecuted_blocks=1 00:14:11.052 00:14:11.052 ' 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:11.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.052 --rc genhtml_branch_coverage=1 00:14:11.052 --rc genhtml_function_coverage=1 00:14:11.052 --rc genhtml_legend=1 00:14:11.052 --rc geninfo_all_blocks=1 00:14:11.052 --rc geninfo_unexecuted_blocks=1 00:14:11.052 00:14:11.052 ' 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:11.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.052 --rc genhtml_branch_coverage=1 00:14:11.052 --rc genhtml_function_coverage=1 00:14:11.052 --rc genhtml_legend=1 00:14:11.052 --rc geninfo_all_blocks=1 00:14:11.052 --rc geninfo_unexecuted_blocks=1 00:14:11.052 00:14:11.052 ' 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:11.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.052 --rc genhtml_branch_coverage=1 00:14:11.052 --rc genhtml_function_coverage=1 00:14:11.052 --rc genhtml_legend=1 00:14:11.052 --rc geninfo_all_blocks=1 00:14:11.052 --rc 
geninfo_unexecuted_blocks=1 00:14:11.052 00:14:11.052 ' 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:11.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1486991 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1486991' 00:14:11.052 Process pid: 1486991 00:14:11.052 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:11.053 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:11.053 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1486991 00:14:11.053 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 1486991 ']' 00:14:11.053 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.053 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:11.053 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.053 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:11.053 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:11.053 [2024-12-12 10:27:45.014202] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:14:11.053 [2024-12-12 10:27:45.014251] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:11.310 [2024-12-12 10:27:45.090078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:11.310 [2024-12-12 10:27:45.131141] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:11.310 [2024-12-12 10:27:45.131176] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:11.310 [2024-12-12 10:27:45.131183] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:11.310 [2024-12-12 10:27:45.131189] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:11.310 [2024-12-12 10:27:45.131194] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:11.310 [2024-12-12 10:27:45.132519] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.310 [2024-12-12 10:27:45.132623] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.310 [2024-12-12 10:27:45.132625] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:14:11.310 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:11.310 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:14:11.310 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:12.247 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:12.247 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:12.247 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:12.247 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.247 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:12.247 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.247 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:12.247 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:12.247 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.247 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:12.506 malloc0 00:14:12.506 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.507 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:12.507 10:27:46 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.507 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:12.507 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.507 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:12.507 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.507 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:12.507 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.507 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:12.507 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.507 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:12.507 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.507 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:12.507 00:14:12.507 00:14:12.507 CUnit - A unit testing framework for C - Version 2.1-3 00:14:12.507 http://cunit.sourceforge.net/ 00:14:12.507 00:14:12.507 00:14:12.507 Suite: nvme_compliance 00:14:12.507 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-12 10:27:46.476033] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:12.507 [2024-12-12 10:27:46.477380] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:12.507 [2024-12-12 10:27:46.477394] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:12.507 [2024-12-12 10:27:46.477399] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:12.507 [2024-12-12 10:27:46.479055] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:12.507 passed 00:14:12.765 Test: admin_identify_ctrlr_verify_fused ...[2024-12-12 10:27:46.554603] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:12.765 [2024-12-12 10:27:46.557630] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:12.765 passed 00:14:12.765 Test: admin_identify_ns ...[2024-12-12 10:27:46.636443] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:12.766 [2024-12-12 10:27:46.696579] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:12.766 [2024-12-12 10:27:46.704584] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:12.766 [2024-12-12 10:27:46.725663] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:14:12.766 passed 00:14:13.023 Test: admin_get_features_mandatory_features ...[2024-12-12 10:27:46.802434] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:13.023 [2024-12-12 10:27:46.805463] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:13.023 passed 00:14:13.023 Test: admin_get_features_optional_features ...[2024-12-12 10:27:46.883013] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:13.023 [2024-12-12 10:27:46.886033] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:13.023 passed 00:14:13.023 Test: admin_set_features_number_of_queues ...[2024-12-12 10:27:46.961870] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:13.281 [2024-12-12 10:27:47.070661] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:13.281 passed 00:14:13.281 Test: admin_get_log_page_mandatory_logs ...[2024-12-12 10:27:47.143374] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:13.281 [2024-12-12 10:27:47.146392] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:13.281 passed 00:14:13.281 Test: admin_get_log_page_with_lpo ...[2024-12-12 10:27:47.226099] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:13.281 [2024-12-12 10:27:47.294577] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:13.540 [2024-12-12 10:27:47.307650] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:13.540 passed 00:14:13.540 Test: fabric_property_get ...[2024-12-12 10:27:47.380689] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:13.540 [2024-12-12 10:27:47.381928] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:13.540 [2024-12-12 10:27:47.383708] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:13.540 passed 00:14:13.540 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-12 10:27:47.460211] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:13.540 [2024-12-12 10:27:47.461444] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:13.540 [2024-12-12 10:27:47.464242] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:13.540 passed 00:14:13.540 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-12 10:27:47.539860] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:13.798 [2024-12-12 10:27:47.627581] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:13.798 [2024-12-12 10:27:47.643574] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:13.798 [2024-12-12 10:27:47.648667] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:13.798 passed 00:14:13.798 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-12 10:27:47.722487] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:13.798 [2024-12-12 10:27:47.723711] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:13.798 [2024-12-12 10:27:47.725503] vfio_user.c:2835:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:14:13.798 passed 00:14:13.798 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-12 10:27:47.802236] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:14.056 [2024-12-12 10:27:47.878579] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:14.056 [2024-12-12 10:27:47.902582] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:14.056 [2024-12-12 10:27:47.907649] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:14.056 passed 00:14:14.056 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-12 10:27:47.983322] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:14.056 [2024-12-12 10:27:47.984571] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:14.056 [2024-12-12 10:27:47.984596] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:14.056 [2024-12-12 10:27:47.986344] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:14.056 passed 00:14:14.056 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-12 10:27:48.058879] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:14.315 [2024-12-12 10:27:48.150583] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:14:14.315 [2024-12-12 10:27:48.158581] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:14.315 [2024-12-12 10:27:48.166583] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:14.315 [2024-12-12 10:27:48.177577] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:14.315 [2024-12-12 10:27:48.206663] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:14.315 passed 00:14:14.315 Test: admin_create_io_sq_verify_pc ...[2024-12-12 10:27:48.280428] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:14.315 [2024-12-12 10:27:48.295583] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:14.315 [2024-12-12 10:27:48.313623] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:14.574 passed 00:14:14.574 Test: admin_create_io_qp_max_qps ...[2024-12-12 10:27:48.391175] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:15.510 [2024-12-12 10:27:49.494579] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:14:16.077 [2024-12-12 10:27:49.882639] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:16.077 passed 00:14:16.077 Test: admin_create_io_sq_shared_cq ...[2024-12-12 10:27:49.956649] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:16.077 [2024-12-12 10:27:50.090583] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:16.336 [2024-12-12 10:27:50.127652] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:16.336 passed 00:14:16.336 00:14:16.336 Run Summary: Type Total Ran Passed Failed Inactive 00:14:16.336 suites 1 1 n/a 0 0 00:14:16.336 tests 18 18 18 0 0 00:14:16.336 asserts 
360 360 360 0 n/a 00:14:16.336 00:14:16.336 Elapsed time = 1.502 seconds 00:14:16.336 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1486991 00:14:16.336 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 1486991 ']' 00:14:16.336 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 1486991 00:14:16.336 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:14:16.336 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:16.336 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1486991 00:14:16.336 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:16.336 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:16.336 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1486991' 00:14:16.336 killing process with pid 1486991 00:14:16.336 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 1486991 00:14:16.336 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 1486991 00:14:16.595 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:16.595 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:16.595 00:14:16.595 real 0m5.654s 00:14:16.595 user 0m15.808s 00:14:16.595 sys 0m0.502s 00:14:16.595 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:16.595 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:16.595 ************************************ 00:14:16.595 END TEST nvmf_vfio_user_nvme_compliance 00:14:16.595 ************************************ 00:14:16.595 10:27:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:16.595 10:27:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:16.595 10:27:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:16.595 10:27:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:16.595 ************************************ 00:14:16.595 START TEST nvmf_vfio_user_fuzz 00:14:16.595 ************************************ 00:14:16.595 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:16.595 * Looking for test storage... 
00:14:16.595 10:27:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp
00:14:16.595 10:27:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:14:16.595 10:27:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:16.595 10:27:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:14:16.595 ************************************
00:14:16.595 START TEST nvmf_vfio_user_fuzz
00:14:16.595 ************************************
00:14:16.595 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp
00:14:16.595 * Looking for test storage...
00:14:16.595 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:14:16.595 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:14:16.595 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version
00:14:16.595 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:14:16.855 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:14:16.855 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:16.855 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:16.855 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:16.855 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-:
00:14:16.855 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1
00:14:16.855 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-:
00:14:16.855 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2
00:14:16.855 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<'
00:14:16.855 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2
00:14:16.855 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1
00:14:16.855 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:16.855 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in
00:14:16.855 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1
00:14:16.855 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:16.855 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:14:16.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:16.856 --rc genhtml_branch_coverage=1
00:14:16.856 --rc genhtml_function_coverage=1
00:14:16.856 --rc genhtml_legend=1
00:14:16.856 --rc geninfo_all_blocks=1
00:14:16.856 --rc geninfo_unexecuted_blocks=1
00:14:16.856
00:14:16.856 '
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:14:16.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:16.856 --rc genhtml_branch_coverage=1
00:14:16.856 --rc genhtml_function_coverage=1
00:14:16.856 --rc genhtml_legend=1
00:14:16.856 --rc geninfo_all_blocks=1
00:14:16.856 --rc geninfo_unexecuted_blocks=1
00:14:16.856
00:14:16.856 '
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:14:16.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:16.856 --rc genhtml_branch_coverage=1
00:14:16.856 --rc genhtml_function_coverage=1
00:14:16.856 --rc genhtml_legend=1
00:14:16.856 --rc geninfo_all_blocks=1
00:14:16.856 --rc geninfo_unexecuted_blocks=1
00:14:16.856
00:14:16.856 '
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:14:16.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:16.856 --rc genhtml_branch_coverage=1
00:14:16.856 --rc genhtml_function_coverage=1
00:14:16.856 --rc genhtml_legend=1
00:14:16.856 --rc geninfo_all_blocks=1
00:14:16.856 --rc geninfo_unexecuted_blocks=1
00:14:16.856
00:14:16.856 '
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
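The cmp_versions walk above ('lt 1.15 2' through 'return 0') is a field-by-field numeric comparison: both version strings are split on '.', '-' and ':', each field pair is compared as integers, and the first difference decides. A compact restatement of the same idea (a sketch, not the scripts/common.sh implementation):

    version_lt() {
        local IFS=.-:                   # split fields the way the harness does
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields count as 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                        # equal versions are not less-than
    }

    version_lt 1.15 2 && echo "lcov predates 2.x"   # true here: 1 < 2 in the first field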
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
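The PATH values above balloon because paths/export.sh prepends the same three toolchain directories every time it is sourced, once per test script. A dedup one-liner of this shape would collapse the repeats while preserving order (illustrative only; the harness does not run this):

    PATH=$(printf '%s' "$PATH" | tr ':' '\n' | awk '!seen[$0]++' | paste -sd ':' -)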
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:14:16.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1487955
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1487955'
Process pid: 1487955
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1487955
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 1487955 ']'
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
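The "integer expression expected" message a few records up is not a test failure; it is bash's test builtin rejecting an empty operand in a numeric comparison at nvmf/common.sh line 33, after which the script simply continues. A two-line repro of the same complaint:

    x=''
    [ "$x" -eq 1 ] && echo never_reached
    # bash: [: : integer expression expected   (test exits 2, so the && branch is skipped)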
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:16.856 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:14:17.115 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:17.116 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0
00:14:17.116 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1
00:14:18.051 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER
00:14:18.051 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:18.051 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:14:18.051 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:18.051 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user
00:14:18.051 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0
00:14:18.051 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:18.051 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:14:18.051 malloc0
00:14:18.051 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:18.051 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
00:14:18.051 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:18.051 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:14:18.051 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:18.051 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
00:14:18.051 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:18.051 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:14:18.051 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:18.051 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
00:14:18.051 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:18.051 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:14:18.051 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:18.051 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
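Restated as direct RPC calls, the target bring-up just traced is a handful of calls against the default /var/tmp/spdk.sock socket (a sketch using scripts/rpc.py from the SPDK tree; the test wraps each call in its rpc_cmd helper):

    RPC=scripts/rpc.py
    $RPC nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    $RPC bdev_malloc_create 64 512 -b malloc0        # 64 MiB bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    $RPC nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    $RPC nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0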
00:14:18.051 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a
00:14:50.130 Fuzzing completed. Shutting down the fuzz application
00:14:50.130
00:14:50.130 Dumping successful admin opcodes:
00:14:50.130 9, 10,
00:14:50.130 Dumping successful io opcodes:
00:14:50.130 0,
00:14:50.130 NS: 0x20000081ef00 I/O qp, Total commands completed: 1140243, total successful commands: 4493, random_seed: 1731336448
00:14:50.130 NS: 0x20000081ef00 admin qp, Total commands completed: 280896, total successful commands: 65, random_seed: 417833792
00:14:50.130 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0
00:14:50.130 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:50.130 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:14:50.130 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:50.130 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1487955
00:14:50.130 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 1487955 ']'
00:14:50.130 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 1487955
00:14:50.130 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname
00:14:50.130 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:50.130 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1487955
00:14:50.130 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:14:50.130 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:50.130 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1487955'
killing process with pid 1487955
00:14:50.130 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 1487955
00:14:50.130 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 1487955
00:14:50.130 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt
00:14:50.130 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT
00:14:50.130
00:14:50.130 real 0m32.230s
00:14:50.130 user 0m33.996s
00:14:50.130 sys 0m26.960s
00:14:50.130 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:50.130 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:14:50.130 ************************************
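In the opcode dump just logged, admin opcodes 9 and 10 are Set Features and Get Features and I/O opcode 0 is Flush in the NVMe specification's numbering, which is plausible: commands with few or weakly validated parameters are the ones random fuzz input can complete. The success ratios are correspondingly tiny; quick integer math on the reported counters (illustrative):

    echo $(( 4493 * 1000 / 1140243 ))   # I/O qp: about 3 successes per 1000 commands
    echo $(( 65 * 1000 / 280896 ))      # admin qp: 65 of 280896, rounds to 0 per 1000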
00:14:50.130 END TEST nvmf_vfio_user_fuzz
************************************
00:14:50.130 10:28:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp
00:14:50.130 10:28:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:14:50.130 10:28:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:50.130 10:28:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:14:50.130 ************************************
00:14:50.130 START TEST nvmf_auth_target
00:14:50.130 ************************************
00:14:50.130 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp
00:14:50.130 * Looking for test storage...
00:14:50.130 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:14:50.130 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:14:50.130 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version
00:14:50.130 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-:
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-:
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<'
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:14:50.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:50.131 --rc genhtml_branch_coverage=1
00:14:50.131 --rc genhtml_function_coverage=1
00:14:50.131 --rc genhtml_legend=1
00:14:50.131 --rc geninfo_all_blocks=1
00:14:50.131 --rc geninfo_unexecuted_blocks=1
00:14:50.131
00:14:50.131 '
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:14:50.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:50.131 --rc genhtml_branch_coverage=1
00:14:50.131 --rc genhtml_function_coverage=1
00:14:50.131 --rc genhtml_legend=1
00:14:50.131 --rc geninfo_all_blocks=1
00:14:50.131 --rc geninfo_unexecuted_blocks=1
00:14:50.131
00:14:50.131 '
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:14:50.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:50.131 --rc genhtml_branch_coverage=1
00:14:50.131 --rc genhtml_function_coverage=1
00:14:50.131 --rc genhtml_legend=1
00:14:50.131 --rc geninfo_all_blocks=1
00:14:50.131 --rc geninfo_unexecuted_blocks=1
00:14:50.131
00:14:50.131 '
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:14:50.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:50.131 --rc genhtml_branch_coverage=1
00:14:50.131 --rc genhtml_function_coverage=1
00:14:50.131 --rc genhtml_legend=1
00:14:50.131 --rc geninfo_all_blocks=1
00:14:50.131 --rc geninfo_unexecuted_blocks=1
00:14:50.131
00:14:50.131 '
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:14:50.131 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512")
00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
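The digests and dhgroups arrays just declared drive the rest of auth.sh: the test walks every digest/dhgroup combination (3 x 6 = 18 cases). The driving loop has roughly this shape (a sketch; connect_authenticate is a stand-in name for the per-case helper, not necessarily the script's own):

    digests=("sha256" "sha384" "sha512")
    dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            connect_authenticate "$digest" "$dhgroup" 0   # one DH-HMAC-CHAP round per pair
        done
    done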
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:14:50.131 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:14:55.402 
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=()
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:14:55.402 Found 0000:af:00.0 (0x8086 - 0x159b)
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:14:55.402 Found 0000:af:00.1 (0x8086 - 0x159b)
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:14:55.402 Found net devices under 0000:af:00.0: cvl_0_0
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:14:55.402 Found net devices under 0000:af:00.1: cvl_0_1
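The "Found net devices under ..." lines come from a sysfs walk: for each PCI function that matched the e810 device IDs, the harness globs the kernel interfaces registered under that device node and strips the path prefix. The reconstructed shape, simplified from the nvmf/common.sh trace above:

    for pci in 0000:af:00.0 0000:af:00.1; do                # the two e810 ports found above
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")             # keep only the interface name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done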
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:14:55.402 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:14:55.403 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:14:55.403 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:14:55.403 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:14:55.403 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:14:55.403 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:14:55.403 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:14:55.403 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:14:55.403 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:14:55.403 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:14:55.403 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:14:55.403 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:14:55.403 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:14:55.403 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:14:55.403 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:14:55.403 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
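Condensed, the namespace plumbing just traced does four things: move the target-side port into its own network namespace, address both ends on 10.0.0.0/24, bring the links up, and open the NVMe/TCP port in the firewall. The commands, as run above; the two pings that follow check connectivity in each direction:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target NIC leaves the root netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT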
00:14:55.403 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:14:55.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:14:55.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.395 ms
00:14:55.403
00:14:55.403 --- 10.0.0.2 ping statistics ---
00:14:55.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:55.403 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms
00:14:55.403 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:14:55.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:55.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms
00:14:55.403
00:14:55.403 --- 10.0.0.1 ping statistics ---
00:14:55.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:55.403 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms
00:14:55.403 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:55.403 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0
00:14:55.403 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:14:55.403 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:55.403 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:14:55.403 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:14:55.403 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:55.403 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:14:55.403 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:14:55.403 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth
00:14:55.403 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:14:55.403 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable
00:14:55.403 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:55.403 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1496115
00:14:55.403 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth
00:14:55.403 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1496115
00:14:55.403 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1496115 ']'
00:14:55.403 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:55.403 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:55.403 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:55.403 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:55.403 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable
00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1496316
00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth
00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT
00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48
00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null
00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48
00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2922686ac9d4cb3ef48d5ca7583f6e8843b1fd8c60f5457c
00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX
00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.jYA
00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2922686ac9d4cb3ef48d5ca7583f6e8843b1fd8c60f5457c 0
00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2922686ac9d4cb3ef48d5ca7583f6e8843b1fd8c60f5457c 0
00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2922686ac9d4cb3ef48d5ca7583f6e8843b1fd8c60f5457c
00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0
00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.jYA 00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.jYA 00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.jYA 00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=eff1351dcf993369dffb1ab656a380d08b16f03c03c7cdd5d10e769c455db0fe 00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.8t1 00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key eff1351dcf993369dffb1ab656a380d08b16f03c03c7cdd5d10e769c455db0fe 3 00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 eff1351dcf993369dffb1ab656a380d08b16f03c03c7cdd5d10e769c455db0fe 3 00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=eff1351dcf993369dffb1ab656a380d08b16f03c03c7cdd5d10e769c455db0fe 00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.8t1 00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.8t1 00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.8t1 00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
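Annotation on the slots being filled here (gen_dhchap_key for keys[1] continues directly below): keys[i] later become the --dhchap-key arguments, i.e. the host's DH-HMAC-CHAP secret, while ckeys[i] become --dhchap-ctrlr-key, the controller-side secret that makes the authentication bidirectional, as the nvmf_subsystem_add_host calls later in the trace show. The digest name picks the two-digit hash identifier in the 'DHHC-1:NN:' prefix via the map the trace keeps re-declaring, and the length argument counts hex characters, so the urandom read is len/2 bytes:

declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
# gen_dhchap_key sha512 64  ->  xxd -p -c0 -l 32 /dev/urandom   (64 hex chars)
# gen_dhchap_key null 48    ->  xxd -p -c0 -l 24 /dev/urandom   (48 hex chars)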
00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=38eee00a02975c217e86e61453b845c7 00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Z75 00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 38eee00a02975c217e86e61453b845c7 1 00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 38eee00a02975c217e86e61453b845c7 1 00:14:55.403 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:55.404 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:55.404 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=38eee00a02975c217e86e61453b845c7 00:14:55.404 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:14:55.404 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Z75 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Z75 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.Z75 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=41e1673f5f33e22d8224cc8e75cd911e76d2cc7584f7bd3f 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.hNz 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 41e1673f5f33e22d8224cc8e75cd911e76d2cc7584f7bd3f 2 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 41e1673f5f33e22d8224cc8e75cd911e76d2cc7584f7bd3f 2 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:55.663 10:28:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=41e1673f5f33e22d8224cc8e75cd911e76d2cc7584f7bd3f 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.hNz 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.hNz 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.hNz 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8dd6f739c99894774428c7db3a6ddb5bdf90a7195529f4c5 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.G8U 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8dd6f739c99894774428c7db3a6ddb5bdf90a7195529f4c5 2 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8dd6f739c99894774428c7db3a6ddb5bdf90a7195529f4c5 2 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8dd6f739c99894774428c7db3a6ddb5bdf90a7195529f4c5 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.G8U 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.G8U 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.G8U 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3343c4c8dd34511f66884b634a5e99d1 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.moV 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3343c4c8dd34511f66884b634a5e99d1 1 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3343c4c8dd34511f66884b634a5e99d1 1 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3343c4c8dd34511f66884b634a5e99d1 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.moV 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.moV 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.moV 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6f6631d0d183796726b033a73e9417cc03e6504cb19ee56344ea1ff0c825eaa3 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.jWN 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 6f6631d0d183796726b033a73e9417cc03e6504cb19ee56344ea1ff0c825eaa3 3 00:14:55.663 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6f6631d0d183796726b033a73e9417cc03e6504cb19ee56344ea1ff0c825eaa3 3 00:14:55.664 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:55.664 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:55.664 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6f6631d0d183796726b033a73e9417cc03e6504cb19ee56344ea1ff0c825eaa3 00:14:55.664 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:14:55.664 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:55.922 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.jWN 00:14:55.922 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.jWN 00:14:55.922 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.jWN 00:14:55.922 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:14:55.922 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1496115 00:14:55.922 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1496115 ']' 00:14:55.922 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.922 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:55.922 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.922 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:55.922 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.922 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:55.922 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:55.922 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1496316 /var/tmp/host.sock 00:14:55.922 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1496316 ']' 00:14:55.922 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:55.922 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:55.922 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:55.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
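Annotation: all four key slots are populated at this point, keys[0..3] plus controller keys ckeys[0..2], with ckeys[3] deliberately left empty so the key3 iteration later exercises authentication without a controller secret. A hedged reconstruction of one complete gen_dhchap_key call follows; the DHHC-1 layout (base64 of the ASCII secret followed by its little-endian CRC-32) is an assumption taken from the NVMe in-band authentication secret representation, but the base64 payloads quoted later in this trace decode consistently with it:

# Reconstruction of 'gen_dhchap_key null 48', which produced keys[0] above.
declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex characters of secret material
file=$(mktemp -t spdk.key-null.XXX)
python - "$key" "${digests[null]}" > "$file" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()             # the hex string itself is the secret bytes
crc = zlib.crc32(key).to_bytes(4, byteorder="little")
b64 = base64.b64encode(key + crc).decode()
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), b64), end="")
PY
chmod 0600 "$file"                     # key material: owner-only permissions
keys[0]=$file

As a cross-check, the keys[0] hex string beginning 2922686a reappears later in this trace as DHHC-1:00:MjkyMjY4... ending 7PV+JA==: the base64 of those ASCII characters plus the four CRC bytes.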
00:14:55.922 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:55.922 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.180 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:56.180 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:56.180 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:14:56.180 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.180 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.180 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.180 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:56.180 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.jYA 00:14:56.180 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.180 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.180 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.180 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.jYA 00:14:56.180 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.jYA 00:14:56.437 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.8t1 ]] 00:14:56.437 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8t1 00:14:56.437 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.437 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.437 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.437 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8t1 00:14:56.437 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8t1 00:14:56.696 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:56.696 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Z75 00:14:56.696 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.696 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.696 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.696 10:28:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Z75 00:14:56.696 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Z75 00:14:56.954 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.hNz ]] 00:14:56.954 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.hNz 00:14:56.954 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.954 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.954 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.954 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.hNz 00:14:56.954 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.hNz 00:14:56.954 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:56.954 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.G8U 00:14:56.954 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.954 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.211 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.211 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.G8U 00:14:57.211 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.G8U 00:14:57.211 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.moV ]] 00:14:57.211 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.moV 00:14:57.211 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.211 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.211 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.211 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.moV 00:14:57.211 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.moV 00:14:57.469 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:57.469 10:28:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.jWN 00:14:57.469 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.469 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.469 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.469 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.jWN 00:14:57.469 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.jWN 00:14:57.726 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:14:57.726 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:57.726 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:57.726 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:57.726 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:57.726 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:57.984 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:14:57.984 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:57.984 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:57.984 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:57.984 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:57.984 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.984 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.984 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.984 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.984 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.984 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.984 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.984 
10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.288 00:14:58.288 10:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:58.288 10:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:58.288 10:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.288 10:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.288 10:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.288 10:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.288 10:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.288 10:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.288 10:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:58.288 { 00:14:58.288 "cntlid": 1, 00:14:58.288 "qid": 0, 00:14:58.288 "state": "enabled", 00:14:58.288 "thread": "nvmf_tgt_poll_group_000", 00:14:58.288 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:14:58.288 "listen_address": { 00:14:58.288 "trtype": "TCP", 00:14:58.288 "adrfam": "IPv4", 00:14:58.288 "traddr": "10.0.0.2", 00:14:58.288 "trsvcid": "4420" 00:14:58.288 }, 00:14:58.288 "peer_address": { 00:14:58.288 "trtype": "TCP", 00:14:58.288 "adrfam": "IPv4", 00:14:58.288 "traddr": "10.0.0.1", 00:14:58.288 "trsvcid": "57234" 00:14:58.288 }, 00:14:58.288 "auth": { 00:14:58.288 "state": "completed", 00:14:58.288 "digest": "sha256", 00:14:58.288 "dhgroup": "null" 00:14:58.288 } 00:14:58.288 } 00:14:58.288 ]' 00:14:58.288 10:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:58.288 10:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:58.288 10:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:58.566 10:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:58.566 10:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:58.566 10:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.566 10:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.566 10:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.566 10:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:MjkyMjY4NmFjOWQ0Y2IzZWY0OGQ1Y2E3NTgzZjZlODg0M2IxZmQ4YzYwZjU0NTdj7PV+JA==: --dhchap-ctrl-secret DHHC-1:03:ZWZmMTM1MWRjZjk5MzM2OWRmZmIxYWI2NTZhMzgwZDA4YjE2ZjAzYzAzYzdjZGQ1ZDEwZTc2OWM0NTVkYjBmZc7fSUk=: 00:14:58.566 10:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjkyMjY4NmFjOWQ0Y2IzZWY0OGQ1Y2E3NTgzZjZlODg0M2IxZmQ4YzYwZjU0NTdj7PV+JA==: --dhchap-ctrl-secret DHHC-1:03:ZWZmMTM1MWRjZjk5MzM2OWRmZmIxYWI2NTZhMzgwZDA4YjE2ZjAzYzAzYzdjZGQ1ZDEwZTc2OWM0NTVkYjBmZc7fSUk=: 00:14:59.151 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.151 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:14:59.151 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.151 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.151 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.151 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:59.151 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:59.151 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:59.409 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:14:59.409 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:59.409 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:59.409 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:59.409 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:59.409 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.409 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.409 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.409 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.409 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.409 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.409 10:28:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.409 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.667 00:14:59.667 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:59.667 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:59.667 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:59.925 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:59.925 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:59.925 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.925 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.925 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.925 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:59.925 { 00:14:59.925 "cntlid": 3, 00:14:59.925 "qid": 0, 00:14:59.925 "state": "enabled", 00:14:59.925 "thread": "nvmf_tgt_poll_group_000", 00:14:59.925 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:14:59.925 "listen_address": { 00:14:59.925 "trtype": "TCP", 00:14:59.925 "adrfam": "IPv4", 00:14:59.925 "traddr": "10.0.0.2", 00:14:59.925 "trsvcid": "4420" 00:14:59.925 }, 00:14:59.925 "peer_address": { 00:14:59.925 "trtype": "TCP", 00:14:59.925 "adrfam": "IPv4", 00:14:59.925 "traddr": "10.0.0.1", 00:14:59.925 "trsvcid": "57268" 00:14:59.925 }, 00:14:59.925 "auth": { 00:14:59.925 "state": "completed", 00:14:59.925 "digest": "sha256", 00:14:59.925 "dhgroup": "null" 00:14:59.925 } 00:14:59.925 } 00:14:59.925 ]' 00:14:59.925 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:59.925 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:59.925 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:59.925 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:59.925 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:59.925 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:59.925 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:59.925 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.183 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzhlZWUwMGEwMjk3NWMyMTdlODZlNjE0NTNiODQ1YzfgcnyV: --dhchap-ctrl-secret DHHC-1:02:NDFlMTY3M2Y1ZjMzZTIyZDgyMjRjYzhlNzVjZDkxMWU3NmQyY2M3NTg0ZjdiZDNmAuQ/6g==: 00:15:00.183 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MzhlZWUwMGEwMjk3NWMyMTdlODZlNjE0NTNiODQ1YzfgcnyV: --dhchap-ctrl-secret DHHC-1:02:NDFlMTY3M2Y1ZjMzZTIyZDgyMjRjYzhlNzVjZDkxMWU3NmQyY2M3NTg0ZjdiZDNmAuQ/6g==: 00:15:00.750 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:00.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:00.750 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:00.750 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.750 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.750 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.750 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:00.750 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:00.750 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:01.009 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:15:01.009 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:01.009 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:01.009 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:01.009 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:01.009 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.009 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:01.009 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.009 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.009 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.009 10:28:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:01.009 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:01.009 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:01.271 00:15:01.271 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:01.271 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:01.271 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:01.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:01.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:01.529 { 00:15:01.529 "cntlid": 5, 00:15:01.529 "qid": 0, 00:15:01.529 "state": "enabled", 00:15:01.529 "thread": "nvmf_tgt_poll_group_000", 00:15:01.529 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:01.529 "listen_address": { 00:15:01.529 "trtype": "TCP", 00:15:01.529 "adrfam": "IPv4", 00:15:01.529 "traddr": "10.0.0.2", 00:15:01.529 "trsvcid": "4420" 00:15:01.529 }, 00:15:01.529 "peer_address": { 00:15:01.529 "trtype": "TCP", 00:15:01.529 "adrfam": "IPv4", 00:15:01.529 "traddr": "10.0.0.1", 00:15:01.529 "trsvcid": "57282" 00:15:01.529 }, 00:15:01.529 "auth": { 00:15:01.529 "state": "completed", 00:15:01.529 "digest": "sha256", 00:15:01.529 "dhgroup": "null" 00:15:01.529 } 00:15:01.529 } 00:15:01.529 ]' 00:15:01.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:01.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:01.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:01.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:01.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:01.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:01.529 10:28:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:01.787 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGRkNmY3MzljOTk4OTQ3NzQ0MjhjN2RiM2E2ZGRiNWJkZjkwYTcxOTU1MjlmNGM1BAAzDQ==: --dhchap-ctrl-secret DHHC-1:01:MzM0M2M0YzhkZDM0NTExZjY2ODg0YjYzNGE1ZTk5ZDE1X0Ef: 00:15:01.787 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OGRkNmY3MzljOTk4OTQ3NzQ0MjhjN2RiM2E2ZGRiNWJkZjkwYTcxOTU1MjlmNGM1BAAzDQ==: --dhchap-ctrl-secret DHHC-1:01:MzM0M2M0YzhkZDM0NTExZjY2ODg0YjYzNGE1ZTk5ZDE1X0Ef: 00:15:02.352 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.352 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:02.352 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.352 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.352 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.352 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:02.353 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:02.353 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:02.610 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:15:02.610 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:02.610 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:02.610 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:02.610 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:02.610 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.611 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:15:02.611 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.611 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:02.611 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.611 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:02.611 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:02.611 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:02.869 00:15:02.869 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:02.869 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:02.869 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.127 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.127 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.127 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.127 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.127 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.127 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:03.127 { 00:15:03.127 "cntlid": 7, 00:15:03.127 "qid": 0, 00:15:03.127 "state": "enabled", 00:15:03.127 "thread": "nvmf_tgt_poll_group_000", 00:15:03.127 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:03.127 "listen_address": { 00:15:03.127 "trtype": "TCP", 00:15:03.127 "adrfam": "IPv4", 00:15:03.127 "traddr": "10.0.0.2", 00:15:03.127 "trsvcid": "4420" 00:15:03.127 }, 00:15:03.127 "peer_address": { 00:15:03.127 "trtype": "TCP", 00:15:03.127 "adrfam": "IPv4", 00:15:03.127 "traddr": "10.0.0.1", 00:15:03.127 "trsvcid": "57306" 00:15:03.127 }, 00:15:03.127 "auth": { 00:15:03.127 "state": "completed", 00:15:03.127 "digest": "sha256", 00:15:03.127 "dhgroup": "null" 00:15:03.127 } 00:15:03.127 } 00:15:03.127 ]' 00:15:03.127 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:03.127 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:03.127 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:03.127 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:03.127 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:03.127 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.127 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.127 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.385 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmY2NjMxZDBkMTgzNzk2NzI2YjAzM2E3M2U5NDE3Y2MwM2U2NTA0Y2IxOWVlNTYzNDRlYTFmZjBjODI1ZWFhM7hA1AQ=: 00:15:03.385 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NmY2NjMxZDBkMTgzNzk2NzI2YjAzM2E3M2U5NDE3Y2MwM2U2NTA0Y2IxOWVlNTYzNDRlYTFmZjBjODI1ZWFhM7hA1AQ=: 00:15:03.951 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.951 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:03.951 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.951 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.951 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.951 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:03.951 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:03.951 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:03.951 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:04.210 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:15:04.210 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:04.210 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:04.210 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:04.210 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:04.210 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.210 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.210 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.210 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.210 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.210 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.210 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.210 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.468 00:15:04.468 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:04.468 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:04.468 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.468 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.468 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.468 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.468 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.727 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.727 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:04.727 { 00:15:04.727 "cntlid": 9, 00:15:04.727 "qid": 0, 00:15:04.727 "state": "enabled", 00:15:04.727 "thread": "nvmf_tgt_poll_group_000", 00:15:04.727 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:04.727 "listen_address": { 00:15:04.727 "trtype": "TCP", 00:15:04.727 "adrfam": "IPv4", 00:15:04.727 "traddr": "10.0.0.2", 00:15:04.727 "trsvcid": "4420" 00:15:04.727 }, 00:15:04.727 "peer_address": { 00:15:04.727 "trtype": "TCP", 00:15:04.727 "adrfam": "IPv4", 00:15:04.727 "traddr": "10.0.0.1", 00:15:04.727 "trsvcid": "57334" 00:15:04.727 }, 00:15:04.727 "auth": { 00:15:04.727 "state": "completed", 00:15:04.727 "digest": "sha256", 00:15:04.727 "dhgroup": "ffdhe2048" 00:15:04.727 } 00:15:04.727 } 00:15:04.727 ]' 00:15:04.727 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:04.727 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:04.727 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:04.727 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:15:04.727 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:04.727 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.727 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.727 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:04.986 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjkyMjY4NmFjOWQ0Y2IzZWY0OGQ1Y2E3NTgzZjZlODg0M2IxZmQ4YzYwZjU0NTdj7PV+JA==: --dhchap-ctrl-secret DHHC-1:03:ZWZmMTM1MWRjZjk5MzM2OWRmZmIxYWI2NTZhMzgwZDA4YjE2ZjAzYzAzYzdjZGQ1ZDEwZTc2OWM0NTVkYjBmZc7fSUk=: 00:15:04.986 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjkyMjY4NmFjOWQ0Y2IzZWY0OGQ1Y2E3NTgzZjZlODg0M2IxZmQ4YzYwZjU0NTdj7PV+JA==: --dhchap-ctrl-secret DHHC-1:03:ZWZmMTM1MWRjZjk5MzM2OWRmZmIxYWI2NTZhMzgwZDA4YjE2ZjAzYzAzYzdjZGQ1ZDEwZTc2OWM0NTVkYjBmZc7fSUk=: 00:15:05.551 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.551 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:05.551 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.551 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.551 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.551 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:05.551 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:05.551 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:05.808 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:15:05.808 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:05.808 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:05.808 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:05.808 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:05.808 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:05.808 10:28:39 
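The key1 pass now underway repeats the sequence just completed for key0. Condensed into one place, each sha256/ffdhe2048 pass amounts to the following sketch; sockets, NQNs, addresses, and options are exactly as logged, and rpc_cmd/hostrpc are the script's target-side and host-side rpc.py wrappers as they appear in the trace:

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562

    # host-side SPDK app: permit only the digest/dhgroup pair under test
    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    # target: register the host NQN with the DH-CHAP key pair for this pass
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # host: attach a controller, which drives the authentication exchange
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1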
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.808 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.808 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.808 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.808 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.808 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.808 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:06.065 00:15:06.065 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:06.065 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.065 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:06.065 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.065 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.065 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.065 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.322 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.322 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:06.322 { 00:15:06.322 "cntlid": 11, 00:15:06.322 "qid": 0, 00:15:06.322 "state": "enabled", 00:15:06.322 "thread": "nvmf_tgt_poll_group_000", 00:15:06.322 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:06.322 "listen_address": { 00:15:06.322 "trtype": "TCP", 00:15:06.322 "adrfam": "IPv4", 00:15:06.322 "traddr": "10.0.0.2", 00:15:06.322 "trsvcid": "4420" 00:15:06.322 }, 00:15:06.322 "peer_address": { 00:15:06.322 "trtype": "TCP", 00:15:06.322 "adrfam": "IPv4", 00:15:06.322 "traddr": "10.0.0.1", 00:15:06.322 "trsvcid": "57368" 00:15:06.322 }, 00:15:06.322 "auth": { 00:15:06.322 "state": "completed", 00:15:06.322 "digest": "sha256", 00:15:06.322 "dhgroup": "ffdhe2048" 00:15:06.322 } 00:15:06.322 } 00:15:06.322 ]' 00:15:06.322 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:06.322 10:28:40 
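With the controller attached, the script reads the subsystem's qpairs back from the target and asserts on the auth block of the JSON dump above. The three jq filters in the trace boil down to the sketch below (the filters are verbatim; the variable assignment is written out for readability):

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]      # target/auth.sh@75
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]   # target/auth.sh@76
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # target/auth.sh@77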
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:06.322 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:06.322 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:06.322 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:06.322 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.322 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.323 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.581 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzhlZWUwMGEwMjk3NWMyMTdlODZlNjE0NTNiODQ1YzfgcnyV: --dhchap-ctrl-secret DHHC-1:02:NDFlMTY3M2Y1ZjMzZTIyZDgyMjRjYzhlNzVjZDkxMWU3NmQyY2M3NTg0ZjdiZDNmAuQ/6g==: 00:15:06.581 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MzhlZWUwMGEwMjk3NWMyMTdlODZlNjE0NTNiODQ1YzfgcnyV: --dhchap-ctrl-secret DHHC-1:02:NDFlMTY3M2Y1ZjMzZTIyZDgyMjRjYzhlNzVjZDkxMWU3NmQyY2M3NTg0ZjdiZDNmAuQ/6g==: 00:15:07.148 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.148 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.148 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:07.148 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.148 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.148 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.148 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:07.148 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:07.148 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:07.406 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:15:07.406 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:07.406 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:07.406 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:07.406 10:28:41 
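Each pass also exercises the kernel initiator, as seen just above for key1: the SPDK host controller is detached, nvme-cli connects with the same keys in their DHHC-1 wire representation (--dhchap-secret carries the host key, --dhchap-ctrl-secret the controller key for bidirectional authentication), and the host entry is removed afterwards. In outline, with the logged secrets abbreviated:

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 \
        --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"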
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:07.406 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:07.406 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.407 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.407 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.407 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.407 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.407 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.407 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.665 00:15:07.665 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:07.665 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:07.665 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.665 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.665 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.665 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.665 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.665 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.665 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:07.665 { 00:15:07.665 "cntlid": 13, 00:15:07.665 "qid": 0, 00:15:07.665 "state": "enabled", 00:15:07.665 "thread": "nvmf_tgt_poll_group_000", 00:15:07.665 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:07.665 "listen_address": { 00:15:07.665 "trtype": "TCP", 00:15:07.665 "adrfam": "IPv4", 00:15:07.665 "traddr": "10.0.0.2", 00:15:07.665 "trsvcid": "4420" 00:15:07.665 }, 00:15:07.665 "peer_address": { 00:15:07.665 "trtype": "TCP", 00:15:07.665 "adrfam": "IPv4", 00:15:07.665 "traddr": "10.0.0.1", 00:15:07.665 "trsvcid": "36710" 00:15:07.665 }, 00:15:07.665 "auth": { 00:15:07.665 "state": "completed", 00:15:07.665 "digest": 
"sha256", 00:15:07.665 "dhgroup": "ffdhe2048" 00:15:07.665 } 00:15:07.665 } 00:15:07.665 ]' 00:15:07.665 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:07.923 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:07.923 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:07.923 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:07.923 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:07.923 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.923 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.923 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.180 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGRkNmY3MzljOTk4OTQ3NzQ0MjhjN2RiM2E2ZGRiNWJkZjkwYTcxOTU1MjlmNGM1BAAzDQ==: --dhchap-ctrl-secret DHHC-1:01:MzM0M2M0YzhkZDM0NTExZjY2ODg0YjYzNGE1ZTk5ZDE1X0Ef: 00:15:08.180 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OGRkNmY3MzljOTk4OTQ3NzQ0MjhjN2RiM2E2ZGRiNWJkZjkwYTcxOTU1MjlmNGM1BAAzDQ==: --dhchap-ctrl-secret DHHC-1:01:MzM0M2M0YzhkZDM0NTExZjY2ODg0YjYzNGE1ZTk5ZDE1X0Ef: 00:15:08.745 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.745 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.745 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:08.745 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.745 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.745 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.745 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:08.745 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:08.745 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:08.745 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:15:08.745 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:08.745 10:28:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:08.745 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:08.745 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:08.745 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.745 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:15:08.745 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.745 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.745 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.745 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:08.745 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:08.745 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:09.003 00:15:09.261 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:09.261 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.261 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:09.261 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.261 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.261 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.261 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.261 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.261 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:09.261 { 00:15:09.261 "cntlid": 15, 00:15:09.261 "qid": 0, 00:15:09.261 "state": "enabled", 00:15:09.261 "thread": "nvmf_tgt_poll_group_000", 00:15:09.261 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:09.261 "listen_address": { 00:15:09.261 "trtype": "TCP", 00:15:09.261 "adrfam": "IPv4", 00:15:09.261 "traddr": "10.0.0.2", 00:15:09.261 "trsvcid": "4420" 00:15:09.261 }, 00:15:09.261 "peer_address": { 00:15:09.261 "trtype": "TCP", 00:15:09.261 "adrfam": "IPv4", 00:15:09.261 "traddr": "10.0.0.1", 00:15:09.261 
"trsvcid": "36728" 00:15:09.261 }, 00:15:09.261 "auth": { 00:15:09.261 "state": "completed", 00:15:09.261 "digest": "sha256", 00:15:09.261 "dhgroup": "ffdhe2048" 00:15:09.261 } 00:15:09.261 } 00:15:09.261 ]' 00:15:09.261 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:09.261 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:09.261 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:09.519 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:09.519 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:09.519 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.519 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.519 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.777 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmY2NjMxZDBkMTgzNzk2NzI2YjAzM2E3M2U5NDE3Y2MwM2U2NTA0Y2IxOWVlNTYzNDRlYTFmZjBjODI1ZWFhM7hA1AQ=: 00:15:09.777 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NmY2NjMxZDBkMTgzNzk2NzI2YjAzM2E3M2U5NDE3Y2MwM2U2NTA0Y2IxOWVlNTYzNDRlYTFmZjBjODI1ZWFhM7hA1AQ=: 00:15:10.344 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.344 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:10.344 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.344 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.344 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.344 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:10.344 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:10.344 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:10.344 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:10.344 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:15:10.344 10:28:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:10.344 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:10.344 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:10.344 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:10.344 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.344 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.344 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.344 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.344 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.344 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.344 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.344 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.602 00:15:10.602 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:10.602 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:10.602 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.860 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.860 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.860 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.860 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.860 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.860 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:10.860 { 00:15:10.860 "cntlid": 17, 00:15:10.860 "qid": 0, 00:15:10.860 "state": "enabled", 00:15:10.860 "thread": "nvmf_tgt_poll_group_000", 00:15:10.860 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:10.860 "listen_address": { 00:15:10.860 "trtype": "TCP", 00:15:10.860 "adrfam": "IPv4", 
00:15:10.860 "traddr": "10.0.0.2", 00:15:10.860 "trsvcid": "4420" 00:15:10.860 }, 00:15:10.860 "peer_address": { 00:15:10.860 "trtype": "TCP", 00:15:10.860 "adrfam": "IPv4", 00:15:10.860 "traddr": "10.0.0.1", 00:15:10.860 "trsvcid": "36742" 00:15:10.860 }, 00:15:10.860 "auth": { 00:15:10.860 "state": "completed", 00:15:10.860 "digest": "sha256", 00:15:10.860 "dhgroup": "ffdhe3072" 00:15:10.860 } 00:15:10.860 } 00:15:10.860 ]' 00:15:10.860 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:10.860 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:10.860 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:11.117 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:11.117 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:11.117 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.117 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.117 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.375 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjkyMjY4NmFjOWQ0Y2IzZWY0OGQ1Y2E3NTgzZjZlODg0M2IxZmQ4YzYwZjU0NTdj7PV+JA==: --dhchap-ctrl-secret DHHC-1:03:ZWZmMTM1MWRjZjk5MzM2OWRmZmIxYWI2NTZhMzgwZDA4YjE2ZjAzYzAzYzdjZGQ1ZDEwZTc2OWM0NTVkYjBmZc7fSUk=: 00:15:11.375 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjkyMjY4NmFjOWQ0Y2IzZWY0OGQ1Y2E3NTgzZjZlODg0M2IxZmQ4YzYwZjU0NTdj7PV+JA==: --dhchap-ctrl-secret DHHC-1:03:ZWZmMTM1MWRjZjk5MzM2OWRmZmIxYWI2NTZhMzgwZDA4YjE2ZjAzYzAzYzdjZGQ1ZDEwZTc2OWM0NTVkYjBmZc7fSUk=: 00:15:11.941 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.941 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:11.941 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.941 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.941 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.941 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:11.941 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:11.941 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:11.941 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:15:11.941 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:11.941 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:11.941 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:11.941 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:11.941 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.941 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.941 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.941 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.941 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.941 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.941 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.941 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.198 00:15:12.198 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:12.198 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:12.198 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.456 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.456 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.456 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.456 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.456 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.456 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:12.456 { 
00:15:12.456 "cntlid": 19, 00:15:12.456 "qid": 0, 00:15:12.456 "state": "enabled", 00:15:12.456 "thread": "nvmf_tgt_poll_group_000", 00:15:12.456 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:12.456 "listen_address": { 00:15:12.456 "trtype": "TCP", 00:15:12.456 "adrfam": "IPv4", 00:15:12.456 "traddr": "10.0.0.2", 00:15:12.456 "trsvcid": "4420" 00:15:12.456 }, 00:15:12.456 "peer_address": { 00:15:12.456 "trtype": "TCP", 00:15:12.456 "adrfam": "IPv4", 00:15:12.456 "traddr": "10.0.0.1", 00:15:12.456 "trsvcid": "36754" 00:15:12.456 }, 00:15:12.456 "auth": { 00:15:12.456 "state": "completed", 00:15:12.456 "digest": "sha256", 00:15:12.456 "dhgroup": "ffdhe3072" 00:15:12.456 } 00:15:12.456 } 00:15:12.456 ]' 00:15:12.456 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:12.456 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:12.456 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:12.713 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:12.713 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:12.713 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.713 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.713 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.972 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzhlZWUwMGEwMjk3NWMyMTdlODZlNjE0NTNiODQ1YzfgcnyV: --dhchap-ctrl-secret DHHC-1:02:NDFlMTY3M2Y1ZjMzZTIyZDgyMjRjYzhlNzVjZDkxMWU3NmQyY2M3NTg0ZjdiZDNmAuQ/6g==: 00:15:12.972 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MzhlZWUwMGEwMjk3NWMyMTdlODZlNjE0NTNiODQ1YzfgcnyV: --dhchap-ctrl-secret DHHC-1:02:NDFlMTY3M2Y1ZjMzZTIyZDgyMjRjYzhlNzVjZDkxMWU3NmQyY2M3NTg0ZjdiZDNmAuQ/6g==: 00:15:13.539 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.540 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:13.540 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.540 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.540 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.540 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:13.540 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:13.540 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:13.540 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:15:13.540 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:13.540 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:13.540 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:13.540 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:13.540 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.540 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.540 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.540 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.540 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.540 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.540 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.540 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.798 00:15:13.798 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:13.798 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:13.798 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.057 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.057 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.057 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.057 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.057 10:28:48 
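Two SPDK applications are in play throughout this trace: rpc_cmd talks to the target over its default RPC socket, while every hostrpc line expands to the same rpc.py invocation against a second, host-side app. Judging by the target/auth.sh@31 lines, the wrapper amounts to something like the sketch below (inferred from the trace, not the verbatim script source):

    hostrpc() {
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/host.sock "$@"
    }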
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.057 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:14.057 { 00:15:14.057 "cntlid": 21, 00:15:14.057 "qid": 0, 00:15:14.057 "state": "enabled", 00:15:14.057 "thread": "nvmf_tgt_poll_group_000", 00:15:14.057 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:14.057 "listen_address": { 00:15:14.057 "trtype": "TCP", 00:15:14.057 "adrfam": "IPv4", 00:15:14.057 "traddr": "10.0.0.2", 00:15:14.057 "trsvcid": "4420" 00:15:14.057 }, 00:15:14.057 "peer_address": { 00:15:14.057 "trtype": "TCP", 00:15:14.057 "adrfam": "IPv4", 00:15:14.057 "traddr": "10.0.0.1", 00:15:14.057 "trsvcid": "36796" 00:15:14.057 }, 00:15:14.057 "auth": { 00:15:14.057 "state": "completed", 00:15:14.057 "digest": "sha256", 00:15:14.057 "dhgroup": "ffdhe3072" 00:15:14.057 } 00:15:14.057 } 00:15:14.057 ]' 00:15:14.057 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:14.057 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:14.057 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:14.316 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:14.316 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:14.316 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.316 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.316 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.316 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGRkNmY3MzljOTk4OTQ3NzQ0MjhjN2RiM2E2ZGRiNWJkZjkwYTcxOTU1MjlmNGM1BAAzDQ==: --dhchap-ctrl-secret DHHC-1:01:MzM0M2M0YzhkZDM0NTExZjY2ODg0YjYzNGE1ZTk5ZDE1X0Ef: 00:15:14.316 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OGRkNmY3MzljOTk4OTQ3NzQ0MjhjN2RiM2E2ZGRiNWJkZjkwYTcxOTU1MjlmNGM1BAAzDQ==: --dhchap-ctrl-secret DHHC-1:01:MzM0M2M0YzhkZDM0NTExZjY2ODg0YjYzNGE1ZTk5ZDE1X0Ef: 00:15:14.883 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.883 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:14.883 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.883 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.142 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:15.142 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:15.142 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:15.142 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:15.142 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:15:15.142 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:15.142 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:15.142 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:15.142 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:15.142 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.142 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:15:15.142 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.142 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.142 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.142 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:15.142 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:15.143 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:15.401 00:15:15.401 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:15.401 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:15.401 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.660 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.660 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:15.660 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.660 10:28:49 
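In each qpairs dump, listen_address is the target's fixed listener at 10.0.0.2:4420, while peer_address records the initiator's ephemeral source port, which changes on every reconnect (57334, 57368, 36710, ... so far); cntlid likewise advances by two with each freshly authenticated controller (9, 11, 13, ...). A one-liner in the same style as the script's checks can pull the interesting tuple out of such a dump (illustrative only, not part of the test):

    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 |
        jq -r '.[0] | "\(.cntlid) \(.auth.digest)/\(.auth.dhgroup) \(.auth.state)"'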
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.660 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.660 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:15.660 { 00:15:15.660 "cntlid": 23, 00:15:15.660 "qid": 0, 00:15:15.660 "state": "enabled", 00:15:15.660 "thread": "nvmf_tgt_poll_group_000", 00:15:15.660 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:15.660 "listen_address": { 00:15:15.660 "trtype": "TCP", 00:15:15.660 "adrfam": "IPv4", 00:15:15.660 "traddr": "10.0.0.2", 00:15:15.660 "trsvcid": "4420" 00:15:15.660 }, 00:15:15.660 "peer_address": { 00:15:15.660 "trtype": "TCP", 00:15:15.660 "adrfam": "IPv4", 00:15:15.660 "traddr": "10.0.0.1", 00:15:15.660 "trsvcid": "36816" 00:15:15.660 }, 00:15:15.660 "auth": { 00:15:15.660 "state": "completed", 00:15:15.660 "digest": "sha256", 00:15:15.660 "dhgroup": "ffdhe3072" 00:15:15.660 } 00:15:15.660 } 00:15:15.660 ]' 00:15:15.660 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:15.660 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:15.660 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:15.660 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:15.919 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:15.919 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.919 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.919 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.177 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmY2NjMxZDBkMTgzNzk2NzI2YjAzM2E3M2U5NDE3Y2MwM2U2NTA0Y2IxOWVlNTYzNDRlYTFmZjBjODI1ZWFhM7hA1AQ=: 00:15:16.177 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NmY2NjMxZDBkMTgzNzk2NzI2YjAzM2E3M2U5NDE3Y2MwM2U2NTA0Y2IxOWVlNTYzNDRlYTFmZjBjODI1ZWFhM7hA1AQ=: 00:15:16.745 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:16.745 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.745 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:16.745 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.745 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.745 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:16.745 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:16.745 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:16.745 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:16.745 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:16.745 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:15:16.745 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:16.745 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:16.745 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:16.745 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:16.745 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.745 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.745 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.745 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.745 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.745 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.745 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.745 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.004 00:15:17.004 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:17.004 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:17.004 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.263 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.263 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.263 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.263 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.263 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.263 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:17.263 { 00:15:17.263 "cntlid": 25, 00:15:17.263 "qid": 0, 00:15:17.263 "state": "enabled", 00:15:17.263 "thread": "nvmf_tgt_poll_group_000", 00:15:17.263 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:17.263 "listen_address": { 00:15:17.263 "trtype": "TCP", 00:15:17.263 "adrfam": "IPv4", 00:15:17.263 "traddr": "10.0.0.2", 00:15:17.263 "trsvcid": "4420" 00:15:17.263 }, 00:15:17.263 "peer_address": { 00:15:17.263 "trtype": "TCP", 00:15:17.263 "adrfam": "IPv4", 00:15:17.263 "traddr": "10.0.0.1", 00:15:17.263 "trsvcid": "44402" 00:15:17.263 }, 00:15:17.263 "auth": { 00:15:17.263 "state": "completed", 00:15:17.263 "digest": "sha256", 00:15:17.263 "dhgroup": "ffdhe4096" 00:15:17.263 } 00:15:17.263 } 00:15:17.263 ]' 00:15:17.263 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:17.522 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:17.522 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:17.522 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:17.522 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:17.522 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.522 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.522 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.780 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjkyMjY4NmFjOWQ0Y2IzZWY0OGQ1Y2E3NTgzZjZlODg0M2IxZmQ4YzYwZjU0NTdj7PV+JA==: --dhchap-ctrl-secret DHHC-1:03:ZWZmMTM1MWRjZjk5MzM2OWRmZmIxYWI2NTZhMzgwZDA4YjE2ZjAzYzAzYzdjZGQ1ZDEwZTc2OWM0NTVkYjBmZc7fSUk=: 00:15:17.780 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjkyMjY4NmFjOWQ0Y2IzZWY0OGQ1Y2E3NTgzZjZlODg0M2IxZmQ4YzYwZjU0NTdj7PV+JA==: --dhchap-ctrl-secret DHHC-1:03:ZWZmMTM1MWRjZjk5MzM2OWRmZmIxYWI2NTZhMzgwZDA4YjE2ZjAzYzAzYzdjZGQ1ZDEwZTc2OWM0NTVkYjBmZc7fSUk=: 00:15:18.348 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.348 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:18.348 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.348 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.348 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.348 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:18.348 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:18.348 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:18.348 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:15:18.348 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:18.348 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:18.348 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:18.348 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:18.348 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.348 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.348 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.348 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.348 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.348 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.348 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.348 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.607 00:15:18.607 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:18.607 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.607 10:28:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:18.866 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.866 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.866 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.866 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.866 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.866 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:18.866 { 00:15:18.866 "cntlid": 27, 00:15:18.866 "qid": 0, 00:15:18.866 "state": "enabled", 00:15:18.866 "thread": "nvmf_tgt_poll_group_000", 00:15:18.866 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:18.866 "listen_address": { 00:15:18.866 "trtype": "TCP", 00:15:18.866 "adrfam": "IPv4", 00:15:18.866 "traddr": "10.0.0.2", 00:15:18.866 "trsvcid": "4420" 00:15:18.866 }, 00:15:18.866 "peer_address": { 00:15:18.866 "trtype": "TCP", 00:15:18.866 "adrfam": "IPv4", 00:15:18.866 "traddr": "10.0.0.1", 00:15:18.866 "trsvcid": "44412" 00:15:18.866 }, 00:15:18.866 "auth": { 00:15:18.866 "state": "completed", 00:15:18.866 "digest": "sha256", 00:15:18.866 "dhgroup": "ffdhe4096" 00:15:18.866 } 00:15:18.866 } 00:15:18.866 ]' 00:15:18.866 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:18.866 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:18.866 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:19.125 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:19.125 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:19.125 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.125 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.125 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.125 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzhlZWUwMGEwMjk3NWMyMTdlODZlNjE0NTNiODQ1YzfgcnyV: --dhchap-ctrl-secret DHHC-1:02:NDFlMTY3M2Y1ZjMzZTIyZDgyMjRjYzhlNzVjZDkxMWU3NmQyY2M3NTg0ZjdiZDNmAuQ/6g==: 00:15:19.125 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MzhlZWUwMGEwMjk3NWMyMTdlODZlNjE0NTNiODQ1YzfgcnyV: --dhchap-ctrl-secret DHHC-1:02:NDFlMTY3M2Y1ZjMzZTIyZDgyMjRjYzhlNzVjZDkxMWU3NmQyY2M3NTg0ZjdiZDNmAuQ/6g==: 00:15:19.692 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.693 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.693 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:19.693 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.693 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.693 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.952 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:19.952 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:19.952 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:19.952 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:15:19.952 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:19.952 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:19.952 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:19.952 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:19.952 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.952 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.952 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.952 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.952 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.952 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.952 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.952 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.210 00:15:20.210 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:20.210 10:28:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:20.210 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.469 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.469 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.469 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.469 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.469 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.469 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:20.469 { 00:15:20.469 "cntlid": 29, 00:15:20.469 "qid": 0, 00:15:20.469 "state": "enabled", 00:15:20.469 "thread": "nvmf_tgt_poll_group_000", 00:15:20.469 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:20.469 "listen_address": { 00:15:20.469 "trtype": "TCP", 00:15:20.469 "adrfam": "IPv4", 00:15:20.469 "traddr": "10.0.0.2", 00:15:20.469 "trsvcid": "4420" 00:15:20.469 }, 00:15:20.469 "peer_address": { 00:15:20.469 "trtype": "TCP", 00:15:20.469 "adrfam": "IPv4", 00:15:20.469 "traddr": "10.0.0.1", 00:15:20.469 "trsvcid": "44432" 00:15:20.469 }, 00:15:20.469 "auth": { 00:15:20.469 "state": "completed", 00:15:20.469 "digest": "sha256", 00:15:20.469 "dhgroup": "ffdhe4096" 00:15:20.469 } 00:15:20.469 } 00:15:20.469 ]' 00:15:20.469 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:20.469 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:20.469 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:20.728 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:20.728 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:20.728 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.728 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.728 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.728 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGRkNmY3MzljOTk4OTQ3NzQ0MjhjN2RiM2E2ZGRiNWJkZjkwYTcxOTU1MjlmNGM1BAAzDQ==: --dhchap-ctrl-secret DHHC-1:01:MzM0M2M0YzhkZDM0NTExZjY2ODg0YjYzNGE1ZTk5ZDE1X0Ef: 00:15:20.728 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OGRkNmY3MzljOTk4OTQ3NzQ0MjhjN2RiM2E2ZGRiNWJkZjkwYTcxOTU1MjlmNGM1BAAzDQ==: --dhchap-ctrl-secret 
DHHC-1:01:MzM0M2M0YzhkZDM0NTExZjY2ODg0YjYzNGE1ZTk5ZDE1X0Ef: 00:15:21.295 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.295 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:21.295 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.295 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.295 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.295 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:21.295 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:21.295 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:21.553 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:15:21.553 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:21.553 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:21.553 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:21.553 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:21.553 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.554 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:15:21.554 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.554 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.554 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.554 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:21.554 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:21.554 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:21.812 00:15:21.812 10:28:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:21.812 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:21.812 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.070 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.071 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.071 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.071 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.071 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.071 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:22.071 { 00:15:22.071 "cntlid": 31, 00:15:22.071 "qid": 0, 00:15:22.071 "state": "enabled", 00:15:22.071 "thread": "nvmf_tgt_poll_group_000", 00:15:22.071 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:22.071 "listen_address": { 00:15:22.071 "trtype": "TCP", 00:15:22.071 "adrfam": "IPv4", 00:15:22.071 "traddr": "10.0.0.2", 00:15:22.071 "trsvcid": "4420" 00:15:22.071 }, 00:15:22.071 "peer_address": { 00:15:22.071 "trtype": "TCP", 00:15:22.071 "adrfam": "IPv4", 00:15:22.071 "traddr": "10.0.0.1", 00:15:22.071 "trsvcid": "44458" 00:15:22.071 }, 00:15:22.071 "auth": { 00:15:22.071 "state": "completed", 00:15:22.071 "digest": "sha256", 00:15:22.071 "dhgroup": "ffdhe4096" 00:15:22.071 } 00:15:22.071 } 00:15:22.071 ]' 00:15:22.071 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:22.071 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:22.071 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:22.071 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:22.071 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:22.071 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.071 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.071 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.329 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmY2NjMxZDBkMTgzNzk2NzI2YjAzM2E3M2U5NDE3Y2MwM2U2NTA0Y2IxOWVlNTYzNDRlYTFmZjBjODI1ZWFhM7hA1AQ=: 00:15:22.329 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:NmY2NjMxZDBkMTgzNzk2NzI2YjAzM2E3M2U5NDE3Y2MwM2U2NTA0Y2IxOWVlNTYzNDRlYTFmZjBjODI1ZWFhM7hA1AQ=: 00:15:22.896 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.896 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:22.896 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.896 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.896 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.896 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:22.896 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:22.896 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:22.896 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:23.156 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:15:23.156 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:23.156 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:23.156 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:23.156 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:23.156 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.156 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:23.156 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.156 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.156 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.156 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:23.156 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:23.156 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:23.414 00:15:23.414 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:23.414 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:23.414 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.673 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.673 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:23.673 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.673 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.673 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.673 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:23.673 { 00:15:23.673 "cntlid": 33, 00:15:23.673 "qid": 0, 00:15:23.673 "state": "enabled", 00:15:23.673 "thread": "nvmf_tgt_poll_group_000", 00:15:23.673 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:23.673 "listen_address": { 00:15:23.673 "trtype": "TCP", 00:15:23.673 "adrfam": "IPv4", 00:15:23.673 "traddr": "10.0.0.2", 00:15:23.673 "trsvcid": "4420" 00:15:23.673 }, 00:15:23.673 "peer_address": { 00:15:23.673 "trtype": "TCP", 00:15:23.673 "adrfam": "IPv4", 00:15:23.673 "traddr": "10.0.0.1", 00:15:23.673 "trsvcid": "44496" 00:15:23.673 }, 00:15:23.673 "auth": { 00:15:23.673 "state": "completed", 00:15:23.673 "digest": "sha256", 00:15:23.673 "dhgroup": "ffdhe6144" 00:15:23.673 } 00:15:23.673 } 00:15:23.673 ]' 00:15:23.673 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:23.673 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:23.673 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:23.932 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:23.932 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:23.932 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:23.932 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:23.932 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.932 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjkyMjY4NmFjOWQ0Y2IzZWY0OGQ1Y2E3NTgzZjZlODg0M2IxZmQ4YzYwZjU0NTdj7PV+JA==: --dhchap-ctrl-secret 
DHHC-1:03:ZWZmMTM1MWRjZjk5MzM2OWRmZmIxYWI2NTZhMzgwZDA4YjE2ZjAzYzAzYzdjZGQ1ZDEwZTc2OWM0NTVkYjBmZc7fSUk=: 00:15:23.932 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjkyMjY4NmFjOWQ0Y2IzZWY0OGQ1Y2E3NTgzZjZlODg0M2IxZmQ4YzYwZjU0NTdj7PV+JA==: --dhchap-ctrl-secret DHHC-1:03:ZWZmMTM1MWRjZjk5MzM2OWRmZmIxYWI2NTZhMzgwZDA4YjE2ZjAzYzAzYzdjZGQ1ZDEwZTc2OWM0NTVkYjBmZc7fSUk=: 00:15:24.499 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:24.758 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:24.758 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:24.758 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.758 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.758 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.758 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:24.758 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:24.758 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:24.758 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:15:24.758 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:24.758 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:24.758 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:24.758 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:24.758 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.758 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:24.758 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.758 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.758 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.759 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:24.759 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:24.759 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:25.326 00:15:25.326 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:25.326 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:25.326 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.326 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.326 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:25.326 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.326 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.326 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.326 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:25.326 { 00:15:25.326 "cntlid": 35, 00:15:25.326 "qid": 0, 00:15:25.326 "state": "enabled", 00:15:25.326 "thread": "nvmf_tgt_poll_group_000", 00:15:25.326 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:25.326 "listen_address": { 00:15:25.326 "trtype": "TCP", 00:15:25.326 "adrfam": "IPv4", 00:15:25.326 "traddr": "10.0.0.2", 00:15:25.326 "trsvcid": "4420" 00:15:25.326 }, 00:15:25.326 "peer_address": { 00:15:25.326 "trtype": "TCP", 00:15:25.326 "adrfam": "IPv4", 00:15:25.326 "traddr": "10.0.0.1", 00:15:25.326 "trsvcid": "44526" 00:15:25.326 }, 00:15:25.326 "auth": { 00:15:25.326 "state": "completed", 00:15:25.326 "digest": "sha256", 00:15:25.326 "dhgroup": "ffdhe6144" 00:15:25.326 } 00:15:25.326 } 00:15:25.326 ]' 00:15:25.326 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:25.585 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:25.585 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:25.585 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:25.585 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:25.585 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:25.585 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:25.585 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:25.844 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzhlZWUwMGEwMjk3NWMyMTdlODZlNjE0NTNiODQ1YzfgcnyV: --dhchap-ctrl-secret DHHC-1:02:NDFlMTY3M2Y1ZjMzZTIyZDgyMjRjYzhlNzVjZDkxMWU3NmQyY2M3NTg0ZjdiZDNmAuQ/6g==: 00:15:25.844 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MzhlZWUwMGEwMjk3NWMyMTdlODZlNjE0NTNiODQ1YzfgcnyV: --dhchap-ctrl-secret DHHC-1:02:NDFlMTY3M2Y1ZjMzZTIyZDgyMjRjYzhlNzVjZDkxMWU3NmQyY2M3NTg0ZjdiZDNmAuQ/6g==: 00:15:26.410 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.410 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.410 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:26.410 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.410 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.410 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.410 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:26.410 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:26.410 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:26.410 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:15:26.410 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:26.410 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:26.410 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:26.410 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:26.410 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.410 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:26.410 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.410 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.410 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.410 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:26.410 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:26.410 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:26.978 00:15:26.978 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:26.978 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:26.978 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.978 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.978 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.978 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.978 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.236 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.236 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:27.236 { 00:15:27.236 "cntlid": 37, 00:15:27.236 "qid": 0, 00:15:27.236 "state": "enabled", 00:15:27.236 "thread": "nvmf_tgt_poll_group_000", 00:15:27.236 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:27.236 "listen_address": { 00:15:27.236 "trtype": "TCP", 00:15:27.236 "adrfam": "IPv4", 00:15:27.236 "traddr": "10.0.0.2", 00:15:27.236 "trsvcid": "4420" 00:15:27.236 }, 00:15:27.236 "peer_address": { 00:15:27.236 "trtype": "TCP", 00:15:27.236 "adrfam": "IPv4", 00:15:27.236 "traddr": "10.0.0.1", 00:15:27.236 "trsvcid": "35980" 00:15:27.236 }, 00:15:27.236 "auth": { 00:15:27.236 "state": "completed", 00:15:27.236 "digest": "sha256", 00:15:27.236 "dhgroup": "ffdhe6144" 00:15:27.236 } 00:15:27.236 } 00:15:27.236 ]' 00:15:27.236 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:27.236 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:27.236 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:27.237 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:27.237 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:27.237 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.237 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:15:27.237 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.495 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGRkNmY3MzljOTk4OTQ3NzQ0MjhjN2RiM2E2ZGRiNWJkZjkwYTcxOTU1MjlmNGM1BAAzDQ==: --dhchap-ctrl-secret DHHC-1:01:MzM0M2M0YzhkZDM0NTExZjY2ODg0YjYzNGE1ZTk5ZDE1X0Ef: 00:15:27.495 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OGRkNmY3MzljOTk4OTQ3NzQ0MjhjN2RiM2E2ZGRiNWJkZjkwYTcxOTU1MjlmNGM1BAAzDQ==: --dhchap-ctrl-secret DHHC-1:01:MzM0M2M0YzhkZDM0NTExZjY2ODg0YjYzNGE1ZTk5ZDE1X0Ef: 00:15:28.063 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.063 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:28.063 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.063 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.063 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.063 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:28.063 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:28.063 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:28.322 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:15:28.322 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:28.322 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:28.322 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:28.322 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:28.322 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.322 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:15:28.322 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.322 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.322 10:29:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.322 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:28.322 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:28.322 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:28.581 00:15:28.581 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:28.581 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:28.581 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.839 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.839 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.839 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.839 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.839 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.839 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:28.839 { 00:15:28.839 "cntlid": 39, 00:15:28.839 "qid": 0, 00:15:28.839 "state": "enabled", 00:15:28.839 "thread": "nvmf_tgt_poll_group_000", 00:15:28.839 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:28.839 "listen_address": { 00:15:28.839 "trtype": "TCP", 00:15:28.839 "adrfam": "IPv4", 00:15:28.839 "traddr": "10.0.0.2", 00:15:28.839 "trsvcid": "4420" 00:15:28.839 }, 00:15:28.839 "peer_address": { 00:15:28.839 "trtype": "TCP", 00:15:28.839 "adrfam": "IPv4", 00:15:28.839 "traddr": "10.0.0.1", 00:15:28.839 "trsvcid": "36002" 00:15:28.839 }, 00:15:28.839 "auth": { 00:15:28.839 "state": "completed", 00:15:28.839 "digest": "sha256", 00:15:28.839 "dhgroup": "ffdhe6144" 00:15:28.839 } 00:15:28.839 } 00:15:28.839 ]' 00:15:28.839 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:28.839 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:28.839 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:28.839 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:28.839 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:28.839 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:15:28.839 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.840 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.098 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmY2NjMxZDBkMTgzNzk2NzI2YjAzM2E3M2U5NDE3Y2MwM2U2NTA0Y2IxOWVlNTYzNDRlYTFmZjBjODI1ZWFhM7hA1AQ=: 00:15:29.098 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NmY2NjMxZDBkMTgzNzk2NzI2YjAzM2E3M2U5NDE3Y2MwM2U2NTA0Y2IxOWVlNTYzNDRlYTFmZjBjODI1ZWFhM7hA1AQ=: 00:15:29.665 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.665 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.665 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:29.665 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.665 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.665 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.665 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:29.665 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:29.665 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:29.665 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:29.923 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:15:29.923 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:29.923 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:29.923 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:29.923 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:29.923 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.923 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:29.923 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:29.923 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.923 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.923 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:29.923 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:29.923 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:30.490 00:15:30.490 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:30.490 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:30.490 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.490 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.490 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.490 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.490 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.490 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.490 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:30.490 { 00:15:30.490 "cntlid": 41, 00:15:30.490 "qid": 0, 00:15:30.490 "state": "enabled", 00:15:30.490 "thread": "nvmf_tgt_poll_group_000", 00:15:30.490 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:30.490 "listen_address": { 00:15:30.490 "trtype": "TCP", 00:15:30.490 "adrfam": "IPv4", 00:15:30.490 "traddr": "10.0.0.2", 00:15:30.490 "trsvcid": "4420" 00:15:30.490 }, 00:15:30.490 "peer_address": { 00:15:30.490 "trtype": "TCP", 00:15:30.490 "adrfam": "IPv4", 00:15:30.490 "traddr": "10.0.0.1", 00:15:30.490 "trsvcid": "36026" 00:15:30.490 }, 00:15:30.490 "auth": { 00:15:30.490 "state": "completed", 00:15:30.490 "digest": "sha256", 00:15:30.490 "dhgroup": "ffdhe8192" 00:15:30.490 } 00:15:30.490 } 00:15:30.490 ]' 00:15:30.490 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:30.490 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:30.490 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:30.749 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:30.749 10:29:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:30.749 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.749 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.749 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.007 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjkyMjY4NmFjOWQ0Y2IzZWY0OGQ1Y2E3NTgzZjZlODg0M2IxZmQ4YzYwZjU0NTdj7PV+JA==: --dhchap-ctrl-secret DHHC-1:03:ZWZmMTM1MWRjZjk5MzM2OWRmZmIxYWI2NTZhMzgwZDA4YjE2ZjAzYzAzYzdjZGQ1ZDEwZTc2OWM0NTVkYjBmZc7fSUk=: 00:15:31.007 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjkyMjY4NmFjOWQ0Y2IzZWY0OGQ1Y2E3NTgzZjZlODg0M2IxZmQ4YzYwZjU0NTdj7PV+JA==: --dhchap-ctrl-secret DHHC-1:03:ZWZmMTM1MWRjZjk5MzM2OWRmZmIxYWI2NTZhMzgwZDA4YjE2ZjAzYzAzYzdjZGQ1ZDEwZTc2OWM0NTVkYjBmZc7fSUk=: 00:15:31.578 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.578 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:31.578 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.578 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.578 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.578 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:31.578 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:31.579 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:31.579 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:15:31.579 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:31.579 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:31.579 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:31.579 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:31.579 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.579 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:31.579 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.579 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.579 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.579 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:31.579 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:31.579 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.146 00:15:32.146 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:32.146 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:32.146 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.405 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.405 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.405 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.405 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.405 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.405 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:32.405 { 00:15:32.405 "cntlid": 43, 00:15:32.405 "qid": 0, 00:15:32.405 "state": "enabled", 00:15:32.405 "thread": "nvmf_tgt_poll_group_000", 00:15:32.405 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:32.405 "listen_address": { 00:15:32.405 "trtype": "TCP", 00:15:32.405 "adrfam": "IPv4", 00:15:32.405 "traddr": "10.0.0.2", 00:15:32.405 "trsvcid": "4420" 00:15:32.405 }, 00:15:32.405 "peer_address": { 00:15:32.405 "trtype": "TCP", 00:15:32.405 "adrfam": "IPv4", 00:15:32.405 "traddr": "10.0.0.1", 00:15:32.405 "trsvcid": "36044" 00:15:32.405 }, 00:15:32.405 "auth": { 00:15:32.405 "state": "completed", 00:15:32.405 "digest": "sha256", 00:15:32.405 "dhgroup": "ffdhe8192" 00:15:32.405 } 00:15:32.405 } 00:15:32.405 ]' 00:15:32.405 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:32.405 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:15:32.405 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:32.405 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:32.405 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:32.405 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.405 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.405 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.664 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzhlZWUwMGEwMjk3NWMyMTdlODZlNjE0NTNiODQ1YzfgcnyV: --dhchap-ctrl-secret DHHC-1:02:NDFlMTY3M2Y1ZjMzZTIyZDgyMjRjYzhlNzVjZDkxMWU3NmQyY2M3NTg0ZjdiZDNmAuQ/6g==: 00:15:32.664 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MzhlZWUwMGEwMjk3NWMyMTdlODZlNjE0NTNiODQ1YzfgcnyV: --dhchap-ctrl-secret DHHC-1:02:NDFlMTY3M2Y1ZjMzZTIyZDgyMjRjYzhlNzVjZDkxMWU3NmQyY2M3NTg0ZjdiZDNmAuQ/6g==: 00:15:33.231 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.231 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:33.231 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.231 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.231 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.231 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:33.231 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:33.231 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:33.490 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:15:33.490 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:33.490 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:33.490 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:33.490 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:33.490 10:29:07 
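
Each pass also round-trips through the kernel initiator with nvme-cli, handing it the DHCHAP secrets directly rather than key names. In isolation the connect/disconnect pair looks like the sketch below; the secret values are placeholders standing in for the DHHC-1 strings shown in the log, and -l 0 (ctrl-loss-tmo) makes a failed authentication surface immediately instead of retrying.

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
    # Kernel-initiator connect with in-band DH-HMAC-CHAP, then tear down.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 \
        --dhchap-secret 'DHHC-1:01:<host secret>' \
        --dhchap-ctrl-secret 'DHHC-1:02:<controller secret>'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
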
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.490 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:33.490 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.490 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.490 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.490 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:33.490 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:33.490 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:34.057 00:15:34.057 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:34.057 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:34.057 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.316 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.316 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.316 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.316 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.316 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.316 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:34.316 { 00:15:34.316 "cntlid": 45, 00:15:34.316 "qid": 0, 00:15:34.316 "state": "enabled", 00:15:34.316 "thread": "nvmf_tgt_poll_group_000", 00:15:34.316 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:34.316 "listen_address": { 00:15:34.316 "trtype": "TCP", 00:15:34.316 "adrfam": "IPv4", 00:15:34.316 "traddr": "10.0.0.2", 00:15:34.316 "trsvcid": "4420" 00:15:34.316 }, 00:15:34.316 "peer_address": { 00:15:34.316 "trtype": "TCP", 00:15:34.316 "adrfam": "IPv4", 00:15:34.316 "traddr": "10.0.0.1", 00:15:34.316 "trsvcid": "36082" 00:15:34.316 }, 00:15:34.316 "auth": { 00:15:34.316 "state": "completed", 00:15:34.316 "digest": "sha256", 00:15:34.316 "dhgroup": "ffdhe8192" 00:15:34.316 } 00:15:34.316 } 00:15:34.316 ]' 00:15:34.316 
10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:34.316 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:34.316 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:34.316 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:34.316 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:34.316 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.316 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.316 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.575 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGRkNmY3MzljOTk4OTQ3NzQ0MjhjN2RiM2E2ZGRiNWJkZjkwYTcxOTU1MjlmNGM1BAAzDQ==: --dhchap-ctrl-secret DHHC-1:01:MzM0M2M0YzhkZDM0NTExZjY2ODg0YjYzNGE1ZTk5ZDE1X0Ef: 00:15:34.575 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OGRkNmY3MzljOTk4OTQ3NzQ0MjhjN2RiM2E2ZGRiNWJkZjkwYTcxOTU1MjlmNGM1BAAzDQ==: --dhchap-ctrl-secret DHHC-1:01:MzM0M2M0YzhkZDM0NTExZjY2ODg0YjYzNGE1ZTk5ZDE1X0Ef: 00:15:35.143 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.143 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:35.143 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.143 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.143 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.143 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:35.143 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:35.143 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:35.402 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:15:35.402 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:35.402 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:35.402 10:29:09 
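
On the target side, access is granted and revoked per pass with nvmf_subsystem_add_host and nvmf_subsystem_remove_host, naming keys rather than raw secrets. A minimal sketch, assuming key2/ckey2 were registered with the target earlier in the test and that rpc_cmd uses the default target RPC socket:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
    # Allow this host on the subsystem, requiring DHCHAP with key2; naming
    # a controller key (ckey2) as well makes authentication bidirectional.
    "$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # ... connect, verify, disconnect ...
    "$rpc" nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
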
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:35.402 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:35.402 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.402 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:15:35.402 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.402 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.402 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.402 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:35.402 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:35.402 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:35.970 00:15:35.970 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:35.970 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:35.970 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.970 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.970 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.970 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.970 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.970 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.970 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:35.970 { 00:15:35.970 "cntlid": 47, 00:15:35.970 "qid": 0, 00:15:35.970 "state": "enabled", 00:15:35.970 "thread": "nvmf_tgt_poll_group_000", 00:15:35.970 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:35.970 "listen_address": { 00:15:35.970 "trtype": "TCP", 00:15:35.970 "adrfam": "IPv4", 00:15:35.970 "traddr": "10.0.0.2", 00:15:35.970 "trsvcid": "4420" 00:15:35.970 }, 00:15:35.970 "peer_address": { 00:15:35.970 "trtype": "TCP", 00:15:35.970 "adrfam": "IPv4", 00:15:35.970 "traddr": "10.0.0.1", 00:15:35.970 "trsvcid": "36122" 00:15:35.970 }, 00:15:35.970 "auth": { 00:15:35.970 "state": "completed", 00:15:35.970 
"digest": "sha256", 00:15:35.970 "dhgroup": "ffdhe8192" 00:15:35.970 } 00:15:35.970 } 00:15:35.970 ]' 00:15:35.970 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:35.970 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:35.970 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:36.249 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:36.249 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:36.249 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.249 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.249 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.548 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmY2NjMxZDBkMTgzNzk2NzI2YjAzM2E3M2U5NDE3Y2MwM2U2NTA0Y2IxOWVlNTYzNDRlYTFmZjBjODI1ZWFhM7hA1AQ=: 00:15:36.548 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NmY2NjMxZDBkMTgzNzk2NzI2YjAzM2E3M2U5NDE3Y2MwM2U2NTA0Y2IxOWVlNTYzNDRlYTFmZjBjODI1ZWFhM7hA1AQ=: 00:15:36.862 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.862 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:36.862 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.862 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.862 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.862 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:36.862 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:36.862 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:36.862 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:36.862 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:37.121 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:15:37.121 10:29:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:37.121 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:37.121 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:37.121 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:37.121 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.121 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.121 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.121 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.121 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.121 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.121 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.121 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.380 00:15:37.380 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:37.380 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:37.380 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.638 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.638 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.638 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.638 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.638 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.638 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:37.638 { 00:15:37.638 "cntlid": 49, 00:15:37.638 "qid": 0, 00:15:37.638 "state": "enabled", 00:15:37.638 "thread": "nvmf_tgt_poll_group_000", 00:15:37.638 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:37.638 "listen_address": { 00:15:37.638 "trtype": "TCP", 00:15:37.638 "adrfam": "IPv4", 
00:15:37.638 "traddr": "10.0.0.2", 00:15:37.638 "trsvcid": "4420" 00:15:37.638 }, 00:15:37.638 "peer_address": { 00:15:37.638 "trtype": "TCP", 00:15:37.638 "adrfam": "IPv4", 00:15:37.638 "traddr": "10.0.0.1", 00:15:37.638 "trsvcid": "51852" 00:15:37.638 }, 00:15:37.638 "auth": { 00:15:37.638 "state": "completed", 00:15:37.638 "digest": "sha384", 00:15:37.638 "dhgroup": "null" 00:15:37.638 } 00:15:37.638 } 00:15:37.638 ]' 00:15:37.638 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:37.638 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:37.638 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:37.639 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:37.639 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:37.897 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.897 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.897 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.897 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjkyMjY4NmFjOWQ0Y2IzZWY0OGQ1Y2E3NTgzZjZlODg0M2IxZmQ4YzYwZjU0NTdj7PV+JA==: --dhchap-ctrl-secret DHHC-1:03:ZWZmMTM1MWRjZjk5MzM2OWRmZmIxYWI2NTZhMzgwZDA4YjE2ZjAzYzAzYzdjZGQ1ZDEwZTc2OWM0NTVkYjBmZc7fSUk=: 00:15:37.897 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjkyMjY4NmFjOWQ0Y2IzZWY0OGQ1Y2E3NTgzZjZlODg0M2IxZmQ4YzYwZjU0NTdj7PV+JA==: --dhchap-ctrl-secret DHHC-1:03:ZWZmMTM1MWRjZjk5MzM2OWRmZmIxYWI2NTZhMzgwZDA4YjE2ZjAzYzAzYzdjZGQ1ZDEwZTc2OWM0NTVkYjBmZc7fSUk=: 00:15:38.465 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.465 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:38.465 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.465 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.465 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.465 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:38.465 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:38.465 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:38.724 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:15:38.724 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:38.724 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:38.724 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:38.724 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:38.724 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.724 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:38.724 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.724 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.724 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.724 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:38.724 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:38.724 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:38.983 00:15:38.983 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:38.983 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:38.983 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.242 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.242 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.242 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.242 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.242 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.242 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:39.242 { 00:15:39.242 "cntlid": 51, 00:15:39.242 "qid": 0, 00:15:39.242 "state": "enabled", 
00:15:39.242 "thread": "nvmf_tgt_poll_group_000", 00:15:39.242 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:39.242 "listen_address": { 00:15:39.242 "trtype": "TCP", 00:15:39.242 "adrfam": "IPv4", 00:15:39.242 "traddr": "10.0.0.2", 00:15:39.242 "trsvcid": "4420" 00:15:39.242 }, 00:15:39.242 "peer_address": { 00:15:39.242 "trtype": "TCP", 00:15:39.242 "adrfam": "IPv4", 00:15:39.242 "traddr": "10.0.0.1", 00:15:39.242 "trsvcid": "51866" 00:15:39.242 }, 00:15:39.242 "auth": { 00:15:39.242 "state": "completed", 00:15:39.242 "digest": "sha384", 00:15:39.242 "dhgroup": "null" 00:15:39.242 } 00:15:39.242 } 00:15:39.242 ]' 00:15:39.242 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:39.242 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:39.242 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:39.242 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:39.242 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:39.242 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.242 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.242 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.501 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzhlZWUwMGEwMjk3NWMyMTdlODZlNjE0NTNiODQ1YzfgcnyV: --dhchap-ctrl-secret DHHC-1:02:NDFlMTY3M2Y1ZjMzZTIyZDgyMjRjYzhlNzVjZDkxMWU3NmQyY2M3NTg0ZjdiZDNmAuQ/6g==: 00:15:39.501 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MzhlZWUwMGEwMjk3NWMyMTdlODZlNjE0NTNiODQ1YzfgcnyV: --dhchap-ctrl-secret DHHC-1:02:NDFlMTY3M2Y1ZjMzZTIyZDgyMjRjYzhlNzVjZDkxMWU3NmQyY2M3NTg0ZjdiZDNmAuQ/6g==: 00:15:40.068 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.068 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:40.068 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.068 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.068 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.068 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:40.068 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:15:40.068 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:40.327 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:15:40.327 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:40.327 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:40.327 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:40.327 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:40.327 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.327 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:40.327 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.327 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.327 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.327 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:40.327 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:40.327 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:40.586 00:15:40.586 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:40.586 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:40.586 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.845 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.845 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.845 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.845 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.845 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.845 10:29:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:40.845 { 00:15:40.845 "cntlid": 53, 00:15:40.845 "qid": 0, 00:15:40.845 "state": "enabled", 00:15:40.845 "thread": "nvmf_tgt_poll_group_000", 00:15:40.845 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:40.845 "listen_address": { 00:15:40.845 "trtype": "TCP", 00:15:40.845 "adrfam": "IPv4", 00:15:40.845 "traddr": "10.0.0.2", 00:15:40.845 "trsvcid": "4420" 00:15:40.845 }, 00:15:40.845 "peer_address": { 00:15:40.845 "trtype": "TCP", 00:15:40.845 "adrfam": "IPv4", 00:15:40.845 "traddr": "10.0.0.1", 00:15:40.845 "trsvcid": "51888" 00:15:40.845 }, 00:15:40.845 "auth": { 00:15:40.845 "state": "completed", 00:15:40.845 "digest": "sha384", 00:15:40.845 "dhgroup": "null" 00:15:40.845 } 00:15:40.845 } 00:15:40.845 ]' 00:15:40.845 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:40.845 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:40.845 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:40.845 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:40.846 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:40.846 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.846 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.846 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.104 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGRkNmY3MzljOTk4OTQ3NzQ0MjhjN2RiM2E2ZGRiNWJkZjkwYTcxOTU1MjlmNGM1BAAzDQ==: --dhchap-ctrl-secret DHHC-1:01:MzM0M2M0YzhkZDM0NTExZjY2ODg0YjYzNGE1ZTk5ZDE1X0Ef: 00:15:41.104 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OGRkNmY3MzljOTk4OTQ3NzQ0MjhjN2RiM2E2ZGRiNWJkZjkwYTcxOTU1MjlmNGM1BAAzDQ==: --dhchap-ctrl-secret DHHC-1:01:MzM0M2M0YzhkZDM0NTExZjY2ODg0YjYzNGE1ZTk5ZDE1X0Ef: 00:15:41.672 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.672 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.672 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:41.672 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.672 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.672 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.672 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:15:41.672 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:41.672 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:41.931 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:15:41.931 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:41.931 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:41.931 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:41.931 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:41.931 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.931 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:15:41.931 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.931 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.931 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.931 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:41.931 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:41.931 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:42.189 00:15:42.189 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:42.189 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:42.189 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.447 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.447 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.447 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.447 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.447 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.447 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.447 { 00:15:42.447 "cntlid": 55, 00:15:42.447 "qid": 0, 00:15:42.447 "state": "enabled", 00:15:42.447 "thread": "nvmf_tgt_poll_group_000", 00:15:42.447 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:42.447 "listen_address": { 00:15:42.447 "trtype": "TCP", 00:15:42.447 "adrfam": "IPv4", 00:15:42.447 "traddr": "10.0.0.2", 00:15:42.447 "trsvcid": "4420" 00:15:42.447 }, 00:15:42.447 "peer_address": { 00:15:42.447 "trtype": "TCP", 00:15:42.447 "adrfam": "IPv4", 00:15:42.447 "traddr": "10.0.0.1", 00:15:42.447 "trsvcid": "51928" 00:15:42.447 }, 00:15:42.447 "auth": { 00:15:42.447 "state": "completed", 00:15:42.447 "digest": "sha384", 00:15:42.447 "dhgroup": "null" 00:15:42.447 } 00:15:42.447 } 00:15:42.447 ]' 00:15:42.447 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:42.447 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:42.447 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:42.448 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:42.448 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:42.448 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.448 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.448 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.706 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmY2NjMxZDBkMTgzNzk2NzI2YjAzM2E3M2U5NDE3Y2MwM2U2NTA0Y2IxOWVlNTYzNDRlYTFmZjBjODI1ZWFhM7hA1AQ=: 00:15:42.706 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NmY2NjMxZDBkMTgzNzk2NzI2YjAzM2E3M2U5NDE3Y2MwM2U2NTA0Y2IxOWVlNTYzNDRlYTFmZjBjODI1ZWFhM7hA1AQ=: 00:15:43.274 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.274 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:43.274 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.274 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.274 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.274 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:43.274 10:29:17 
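
The target/auth.sh@118, @119, and @120 frames surfacing in the trace are the sweep drivers. Reassembled from those fragments, the overall structure being walked is roughly the loop below. This is a paraphrase, not standalone code: hostrpc, connect_authenticate, and the keys array are defined earlier in auth.sh, and the exact array contents are an assumption (the log has shown sha256/sha384 and null/ffdhe2048/ffdhe8192 so far).

    for digest in "${digests[@]}"; do          # e.g. sha256 sha384 sha512
        for dhgroup in "${dhgroups[@]}"; do    # e.g. null ffdhe2048 .. ffdhe8192
            for keyid in "${!keys[@]}"; do     # key0..key3 in this run
                hostrpc bdev_nvme_set_options --dhchap-digests "$digest" \
                    --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done
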
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.274 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:43.274 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:43.533 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:15:43.533 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:43.533 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:43.533 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:43.533 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:43.533 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.533 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.533 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.533 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.533 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.533 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.533 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.533 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.791 00:15:43.791 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:43.791 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:43.791 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.049 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.049 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.049 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:44.049 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.049 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.049 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.049 { 00:15:44.049 "cntlid": 57, 00:15:44.049 "qid": 0, 00:15:44.049 "state": "enabled", 00:15:44.049 "thread": "nvmf_tgt_poll_group_000", 00:15:44.049 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:44.049 "listen_address": { 00:15:44.049 "trtype": "TCP", 00:15:44.049 "adrfam": "IPv4", 00:15:44.049 "traddr": "10.0.0.2", 00:15:44.049 "trsvcid": "4420" 00:15:44.049 }, 00:15:44.049 "peer_address": { 00:15:44.049 "trtype": "TCP", 00:15:44.049 "adrfam": "IPv4", 00:15:44.050 "traddr": "10.0.0.1", 00:15:44.050 "trsvcid": "51968" 00:15:44.050 }, 00:15:44.050 "auth": { 00:15:44.050 "state": "completed", 00:15:44.050 "digest": "sha384", 00:15:44.050 "dhgroup": "ffdhe2048" 00:15:44.050 } 00:15:44.050 } 00:15:44.050 ]' 00:15:44.050 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.050 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:44.050 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.050 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:44.050 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:44.050 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.050 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.050 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.307 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjkyMjY4NmFjOWQ0Y2IzZWY0OGQ1Y2E3NTgzZjZlODg0M2IxZmQ4YzYwZjU0NTdj7PV+JA==: --dhchap-ctrl-secret DHHC-1:03:ZWZmMTM1MWRjZjk5MzM2OWRmZmIxYWI2NTZhMzgwZDA4YjE2ZjAzYzAzYzdjZGQ1ZDEwZTc2OWM0NTVkYjBmZc7fSUk=: 00:15:44.307 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjkyMjY4NmFjOWQ0Y2IzZWY0OGQ1Y2E3NTgzZjZlODg0M2IxZmQ4YzYwZjU0NTdj7PV+JA==: --dhchap-ctrl-secret DHHC-1:03:ZWZmMTM1MWRjZjk5MzM2OWRmZmIxYWI2NTZhMzgwZDA4YjE2ZjAzYzAzYzdjZGQ1ZDEwZTc2OWM0NTVkYjBmZc7fSUk=: 00:15:44.872 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.872 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.872 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:44.872 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
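
A note on the DHHC-1:xx:...: strings threaded through these connects: that is the NVMe specification's ASCII representation of a DH-HMAC-CHAP secret, where xx names the transformation hash (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the payload is base64 of the key material with a CRC-32 appended. nvme-cli can generate one; the gen-dhchap-key flags below are taken from nvme-cli's documentation and are an assumption as far as this log goes.

    # Generate a 32-byte retained key transformed with SHA-256 (-m 1) for
    # the given host NQN; prints a DHHC-1:01:...: string to stdout.
    nvme gen-dhchap-key -m 1 -l 32 \
        -n nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
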
common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.872 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.872 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.872 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:44.872 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:44.872 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:45.131 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:15:45.131 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:45.131 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:45.131 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:45.131 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:45.131 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.131 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.131 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.131 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.131 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.131 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.131 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.131 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.390 00:15:45.390 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:45.390 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:45.390 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.648 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.648 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.649 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.649 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.649 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.649 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:45.649 { 00:15:45.649 "cntlid": 59, 00:15:45.649 "qid": 0, 00:15:45.649 "state": "enabled", 00:15:45.649 "thread": "nvmf_tgt_poll_group_000", 00:15:45.649 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:45.649 "listen_address": { 00:15:45.649 "trtype": "TCP", 00:15:45.649 "adrfam": "IPv4", 00:15:45.649 "traddr": "10.0.0.2", 00:15:45.649 "trsvcid": "4420" 00:15:45.649 }, 00:15:45.649 "peer_address": { 00:15:45.649 "trtype": "TCP", 00:15:45.649 "adrfam": "IPv4", 00:15:45.649 "traddr": "10.0.0.1", 00:15:45.649 "trsvcid": "51996" 00:15:45.649 }, 00:15:45.649 "auth": { 00:15:45.649 "state": "completed", 00:15:45.649 "digest": "sha384", 00:15:45.649 "dhgroup": "ffdhe2048" 00:15:45.649 } 00:15:45.649 } 00:15:45.649 ]' 00:15:45.649 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:45.649 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:45.649 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:45.649 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:45.649 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:45.649 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.649 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.649 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.908 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzhlZWUwMGEwMjk3NWMyMTdlODZlNjE0NTNiODQ1YzfgcnyV: --dhchap-ctrl-secret DHHC-1:02:NDFlMTY3M2Y1ZjMzZTIyZDgyMjRjYzhlNzVjZDkxMWU3NmQyY2M3NTg0ZjdiZDNmAuQ/6g==: 00:15:45.908 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MzhlZWUwMGEwMjk3NWMyMTdlODZlNjE0NTNiODQ1YzfgcnyV: --dhchap-ctrl-secret DHHC-1:02:NDFlMTY3M2Y1ZjMzZTIyZDgyMjRjYzhlNzVjZDkxMWU3NmQyY2M3NTg0ZjdiZDNmAuQ/6g==: 00:15:46.475 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.475 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:46.475 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.475 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.475 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.475 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:46.475 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:46.475 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:46.733 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:15:46.733 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:46.733 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:46.733 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:46.733 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:46.733 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.733 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.733 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.733 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.733 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.733 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.733 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.733 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.992 00:15:46.992 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:46.992 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:46.992 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.251 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.251 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.251 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.251 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.251 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.251 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.251 { 00:15:47.251 "cntlid": 61, 00:15:47.251 "qid": 0, 00:15:47.251 "state": "enabled", 00:15:47.251 "thread": "nvmf_tgt_poll_group_000", 00:15:47.251 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:47.251 "listen_address": { 00:15:47.251 "trtype": "TCP", 00:15:47.251 "adrfam": "IPv4", 00:15:47.251 "traddr": "10.0.0.2", 00:15:47.251 "trsvcid": "4420" 00:15:47.251 }, 00:15:47.251 "peer_address": { 00:15:47.251 "trtype": "TCP", 00:15:47.251 "adrfam": "IPv4", 00:15:47.251 "traddr": "10.0.0.1", 00:15:47.251 "trsvcid": "59716" 00:15:47.251 }, 00:15:47.251 "auth": { 00:15:47.251 "state": "completed", 00:15:47.251 "digest": "sha384", 00:15:47.251 "dhgroup": "ffdhe2048" 00:15:47.251 } 00:15:47.251 } 00:15:47.251 ]' 00:15:47.251 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:47.251 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:47.251 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:47.251 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:47.251 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:47.251 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.251 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.251 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.509 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGRkNmY3MzljOTk4OTQ3NzQ0MjhjN2RiM2E2ZGRiNWJkZjkwYTcxOTU1MjlmNGM1BAAzDQ==: --dhchap-ctrl-secret DHHC-1:01:MzM0M2M0YzhkZDM0NTExZjY2ODg0YjYzNGE1ZTk5ZDE1X0Ef: 00:15:47.509 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OGRkNmY3MzljOTk4OTQ3NzQ0MjhjN2RiM2E2ZGRiNWJkZjkwYTcxOTU1MjlmNGM1BAAzDQ==: --dhchap-ctrl-secret DHHC-1:01:MzM0M2M0YzhkZDM0NTExZjY2ODg0YjYzNGE1ZTk5ZDE1X0Ef: 00:15:48.077 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.077 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:48.077 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.077 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.077 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.077 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.077 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:48.077 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:48.336 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:15:48.336 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.336 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:48.336 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:48.336 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:48.336 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.336 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:15:48.336 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.336 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.336 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.336 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:48.336 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:48.336 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:48.594 00:15:48.594 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:48.594 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:15:48.594 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.852 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.852 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.852 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.852 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.852 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.852 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:48.852 { 00:15:48.852 "cntlid": 63, 00:15:48.852 "qid": 0, 00:15:48.852 "state": "enabled", 00:15:48.852 "thread": "nvmf_tgt_poll_group_000", 00:15:48.852 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:48.852 "listen_address": { 00:15:48.852 "trtype": "TCP", 00:15:48.852 "adrfam": "IPv4", 00:15:48.852 "traddr": "10.0.0.2", 00:15:48.852 "trsvcid": "4420" 00:15:48.852 }, 00:15:48.852 "peer_address": { 00:15:48.852 "trtype": "TCP", 00:15:48.852 "adrfam": "IPv4", 00:15:48.852 "traddr": "10.0.0.1", 00:15:48.852 "trsvcid": "59744" 00:15:48.852 }, 00:15:48.852 "auth": { 00:15:48.852 "state": "completed", 00:15:48.852 "digest": "sha384", 00:15:48.852 "dhgroup": "ffdhe2048" 00:15:48.852 } 00:15:48.852 } 00:15:48.852 ]' 00:15:48.852 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:48.852 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:48.852 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:48.852 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:48.852 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:48.853 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.853 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.853 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.111 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmY2NjMxZDBkMTgzNzk2NzI2YjAzM2E3M2U5NDE3Y2MwM2U2NTA0Y2IxOWVlNTYzNDRlYTFmZjBjODI1ZWFhM7hA1AQ=: 00:15:49.111 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NmY2NjMxZDBkMTgzNzk2NzI2YjAzM2E3M2U5NDE3Y2MwM2U2NTA0Y2IxOWVlNTYzNDRlYTFmZjBjODI1ZWFhM7hA1AQ=: 00:15:49.679 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:15:49.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.679 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:49.679 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.679 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.679 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.679 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:49.679 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:49.679 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:49.679 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:49.938 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:15:49.938 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:49.938 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:49.938 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:49.938 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:49.938 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.938 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.938 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.938 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.938 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.938 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.938 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.938 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:50.197 
00:15:50.197 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:50.197 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:50.197 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.197 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.197 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.197 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.197 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.455 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.455 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:50.455 { 00:15:50.455 "cntlid": 65, 00:15:50.455 "qid": 0, 00:15:50.455 "state": "enabled", 00:15:50.455 "thread": "nvmf_tgt_poll_group_000", 00:15:50.455 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:50.455 "listen_address": { 00:15:50.455 "trtype": "TCP", 00:15:50.455 "adrfam": "IPv4", 00:15:50.455 "traddr": "10.0.0.2", 00:15:50.455 "trsvcid": "4420" 00:15:50.455 }, 00:15:50.455 "peer_address": { 00:15:50.455 "trtype": "TCP", 00:15:50.455 "adrfam": "IPv4", 00:15:50.455 "traddr": "10.0.0.1", 00:15:50.455 "trsvcid": "59778" 00:15:50.455 }, 00:15:50.455 "auth": { 00:15:50.455 "state": "completed", 00:15:50.455 "digest": "sha384", 00:15:50.455 "dhgroup": "ffdhe3072" 00:15:50.455 } 00:15:50.455 } 00:15:50.455 ]' 00:15:50.455 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:50.455 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:50.455 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:50.455 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:50.455 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:50.455 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.455 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.455 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.714 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjkyMjY4NmFjOWQ0Y2IzZWY0OGQ1Y2E3NTgzZjZlODg0M2IxZmQ4YzYwZjU0NTdj7PV+JA==: --dhchap-ctrl-secret DHHC-1:03:ZWZmMTM1MWRjZjk5MzM2OWRmZmIxYWI2NTZhMzgwZDA4YjE2ZjAzYzAzYzdjZGQ1ZDEwZTc2OWM0NTVkYjBmZc7fSUk=: 00:15:50.714 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjkyMjY4NmFjOWQ0Y2IzZWY0OGQ1Y2E3NTgzZjZlODg0M2IxZmQ4YzYwZjU0NTdj7PV+JA==: --dhchap-ctrl-secret DHHC-1:03:ZWZmMTM1MWRjZjk5MzM2OWRmZmIxYWI2NTZhMzgwZDA4YjE2ZjAzYzAzYzdjZGQ1ZDEwZTc2OWM0NTVkYjBmZc7fSUk=: 00:15:51.280 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.280 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:51.280 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.280 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.280 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.280 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:51.280 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:51.280 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:51.538 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:15:51.538 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:51.538 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:51.538 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:51.538 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:51.538 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.538 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.538 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.538 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.538 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.538 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.538 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.538 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.797 00:15:51.797 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:51.797 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:51.797 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.056 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.056 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.056 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.056 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.056 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.056 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:52.056 { 00:15:52.056 "cntlid": 67, 00:15:52.056 "qid": 0, 00:15:52.056 "state": "enabled", 00:15:52.056 "thread": "nvmf_tgt_poll_group_000", 00:15:52.056 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:52.056 "listen_address": { 00:15:52.056 "trtype": "TCP", 00:15:52.056 "adrfam": "IPv4", 00:15:52.056 "traddr": "10.0.0.2", 00:15:52.056 "trsvcid": "4420" 00:15:52.056 }, 00:15:52.056 "peer_address": { 00:15:52.056 "trtype": "TCP", 00:15:52.056 "adrfam": "IPv4", 00:15:52.056 "traddr": "10.0.0.1", 00:15:52.056 "trsvcid": "59790" 00:15:52.056 }, 00:15:52.056 "auth": { 00:15:52.056 "state": "completed", 00:15:52.056 "digest": "sha384", 00:15:52.056 "dhgroup": "ffdhe3072" 00:15:52.056 } 00:15:52.056 } 00:15:52.056 ]' 00:15:52.056 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:52.056 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:52.056 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:52.056 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:52.056 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:52.056 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.056 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.056 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.315 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzhlZWUwMGEwMjk3NWMyMTdlODZlNjE0NTNiODQ1YzfgcnyV: --dhchap-ctrl-secret 
DHHC-1:02:NDFlMTY3M2Y1ZjMzZTIyZDgyMjRjYzhlNzVjZDkxMWU3NmQyY2M3NTg0ZjdiZDNmAuQ/6g==: 00:15:52.315 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MzhlZWUwMGEwMjk3NWMyMTdlODZlNjE0NTNiODQ1YzfgcnyV: --dhchap-ctrl-secret DHHC-1:02:NDFlMTY3M2Y1ZjMzZTIyZDgyMjRjYzhlNzVjZDkxMWU3NmQyY2M3NTg0ZjdiZDNmAuQ/6g==: 00:15:52.882 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.882 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:52.882 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.882 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.882 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.882 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:52.882 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:52.882 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:53.141 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:15:53.141 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.141 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:53.141 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:53.141 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:53.141 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.141 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.141 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.141 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.141 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.141 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.141 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.141 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.400 00:15:53.400 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:53.400 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:53.400 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.400 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.400 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.400 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.400 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.659 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.659 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:53.659 { 00:15:53.659 "cntlid": 69, 00:15:53.659 "qid": 0, 00:15:53.659 "state": "enabled", 00:15:53.659 "thread": "nvmf_tgt_poll_group_000", 00:15:53.659 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:53.659 "listen_address": { 00:15:53.659 "trtype": "TCP", 00:15:53.659 "adrfam": "IPv4", 00:15:53.659 "traddr": "10.0.0.2", 00:15:53.659 "trsvcid": "4420" 00:15:53.659 }, 00:15:53.659 "peer_address": { 00:15:53.659 "trtype": "TCP", 00:15:53.659 "adrfam": "IPv4", 00:15:53.659 "traddr": "10.0.0.1", 00:15:53.659 "trsvcid": "59816" 00:15:53.659 }, 00:15:53.659 "auth": { 00:15:53.659 "state": "completed", 00:15:53.659 "digest": "sha384", 00:15:53.659 "dhgroup": "ffdhe3072" 00:15:53.659 } 00:15:53.659 } 00:15:53.659 ]' 00:15:53.659 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:53.659 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:53.659 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:53.659 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:53.659 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:53.659 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.659 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.659 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:15:53.918 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGRkNmY3MzljOTk4OTQ3NzQ0MjhjN2RiM2E2ZGRiNWJkZjkwYTcxOTU1MjlmNGM1BAAzDQ==: --dhchap-ctrl-secret DHHC-1:01:MzM0M2M0YzhkZDM0NTExZjY2ODg0YjYzNGE1ZTk5ZDE1X0Ef: 00:15:53.918 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OGRkNmY3MzljOTk4OTQ3NzQ0MjhjN2RiM2E2ZGRiNWJkZjkwYTcxOTU1MjlmNGM1BAAzDQ==: --dhchap-ctrl-secret DHHC-1:01:MzM0M2M0YzhkZDM0NTExZjY2ODg0YjYzNGE1ZTk5ZDE1X0Ef: 00:15:54.486 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.486 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.486 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:54.486 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.486 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.486 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.486 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:54.486 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:54.486 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:54.745 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:15:54.745 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:54.745 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:54.745 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:54.745 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:54.745 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.745 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:15:54.745 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.745 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.745 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.745 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
00:15:54.745 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:54.745 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:55.004 00:15:55.004 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:55.004 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:55.005 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.005 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.005 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.005 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.005 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.005 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.005 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:55.005 { 00:15:55.005 "cntlid": 71, 00:15:55.005 "qid": 0, 00:15:55.005 "state": "enabled", 00:15:55.005 "thread": "nvmf_tgt_poll_group_000", 00:15:55.005 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:55.005 "listen_address": { 00:15:55.005 "trtype": "TCP", 00:15:55.005 "adrfam": "IPv4", 00:15:55.005 "traddr": "10.0.0.2", 00:15:55.005 "trsvcid": "4420" 00:15:55.005 }, 00:15:55.005 "peer_address": { 00:15:55.005 "trtype": "TCP", 00:15:55.005 "adrfam": "IPv4", 00:15:55.005 "traddr": "10.0.0.1", 00:15:55.005 "trsvcid": "59854" 00:15:55.005 }, 00:15:55.005 "auth": { 00:15:55.005 "state": "completed", 00:15:55.005 "digest": "sha384", 00:15:55.005 "dhgroup": "ffdhe3072" 00:15:55.005 } 00:15:55.005 } 00:15:55.005 ]' 00:15:55.005 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:55.264 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:55.264 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:55.264 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:55.264 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:55.264 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.264 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.264 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.523 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmY2NjMxZDBkMTgzNzk2NzI2YjAzM2E3M2U5NDE3Y2MwM2U2NTA0Y2IxOWVlNTYzNDRlYTFmZjBjODI1ZWFhM7hA1AQ=: 00:15:55.523 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NmY2NjMxZDBkMTgzNzk2NzI2YjAzM2E3M2U5NDE3Y2MwM2U2NTA0Y2IxOWVlNTYzNDRlYTFmZjBjODI1ZWFhM7hA1AQ=: 00:15:56.090 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.090 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:56.090 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.090 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.090 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.090 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:56.090 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:56.090 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:56.090 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:56.090 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:15:56.090 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:56.090 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:56.090 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:56.090 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:56.090 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.090 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.090 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.090 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.090 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:56.090 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.090 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.090 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.348 00:15:56.607 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:56.607 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:56.607 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.607 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.607 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.607 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.607 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.607 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.607 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:56.607 { 00:15:56.607 "cntlid": 73, 00:15:56.607 "qid": 0, 00:15:56.607 "state": "enabled", 00:15:56.607 "thread": "nvmf_tgt_poll_group_000", 00:15:56.607 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:56.607 "listen_address": { 00:15:56.607 "trtype": "TCP", 00:15:56.607 "adrfam": "IPv4", 00:15:56.607 "traddr": "10.0.0.2", 00:15:56.607 "trsvcid": "4420" 00:15:56.607 }, 00:15:56.607 "peer_address": { 00:15:56.607 "trtype": "TCP", 00:15:56.607 "adrfam": "IPv4", 00:15:56.607 "traddr": "10.0.0.1", 00:15:56.607 "trsvcid": "50490" 00:15:56.607 }, 00:15:56.607 "auth": { 00:15:56.607 "state": "completed", 00:15:56.607 "digest": "sha384", 00:15:56.607 "dhgroup": "ffdhe4096" 00:15:56.607 } 00:15:56.607 } 00:15:56.607 ]' 00:15:56.607 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:56.865 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:56.865 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:56.865 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:56.865 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:56.865 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.865 
10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.865 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.124 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjkyMjY4NmFjOWQ0Y2IzZWY0OGQ1Y2E3NTgzZjZlODg0M2IxZmQ4YzYwZjU0NTdj7PV+JA==: --dhchap-ctrl-secret DHHC-1:03:ZWZmMTM1MWRjZjk5MzM2OWRmZmIxYWI2NTZhMzgwZDA4YjE2ZjAzYzAzYzdjZGQ1ZDEwZTc2OWM0NTVkYjBmZc7fSUk=: 00:15:57.124 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjkyMjY4NmFjOWQ0Y2IzZWY0OGQ1Y2E3NTgzZjZlODg0M2IxZmQ4YzYwZjU0NTdj7PV+JA==: --dhchap-ctrl-secret DHHC-1:03:ZWZmMTM1MWRjZjk5MzM2OWRmZmIxYWI2NTZhMzgwZDA4YjE2ZjAzYzAzYzdjZGQ1ZDEwZTc2OWM0NTVkYjBmZc7fSUk=: 00:15:57.692 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.692 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:57.692 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.692 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.692 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.692 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:57.692 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:57.692 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:57.692 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:15:57.692 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:57.692 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:57.692 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:57.692 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:57.693 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.693 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.693 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.693 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.693 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.693 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.693 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.693 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.261 00:15:58.261 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:58.261 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.261 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:58.261 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.261 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.261 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.261 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.261 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.261 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:58.261 { 00:15:58.261 "cntlid": 75, 00:15:58.261 "qid": 0, 00:15:58.261 "state": "enabled", 00:15:58.261 "thread": "nvmf_tgt_poll_group_000", 00:15:58.261 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:58.261 "listen_address": { 00:15:58.261 "trtype": "TCP", 00:15:58.261 "adrfam": "IPv4", 00:15:58.261 "traddr": "10.0.0.2", 00:15:58.261 "trsvcid": "4420" 00:15:58.261 }, 00:15:58.261 "peer_address": { 00:15:58.261 "trtype": "TCP", 00:15:58.261 "adrfam": "IPv4", 00:15:58.261 "traddr": "10.0.0.1", 00:15:58.261 "trsvcid": "50520" 00:15:58.261 }, 00:15:58.261 "auth": { 00:15:58.261 "state": "completed", 00:15:58.261 "digest": "sha384", 00:15:58.261 "dhgroup": "ffdhe4096" 00:15:58.261 } 00:15:58.261 } 00:15:58.261 ]' 00:15:58.261 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:58.261 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:58.261 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:58.520 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:15:58.520 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:58.520 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.520 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.520 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.520 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzhlZWUwMGEwMjk3NWMyMTdlODZlNjE0NTNiODQ1YzfgcnyV: --dhchap-ctrl-secret DHHC-1:02:NDFlMTY3M2Y1ZjMzZTIyZDgyMjRjYzhlNzVjZDkxMWU3NmQyY2M3NTg0ZjdiZDNmAuQ/6g==: 00:15:58.520 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MzhlZWUwMGEwMjk3NWMyMTdlODZlNjE0NTNiODQ1YzfgcnyV: --dhchap-ctrl-secret DHHC-1:02:NDFlMTY3M2Y1ZjMzZTIyZDgyMjRjYzhlNzVjZDkxMWU3NmQyY2M3NTg0ZjdiZDNmAuQ/6g==: 00:15:59.087 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.346 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:59.346 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.346 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.346 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.346 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:59.346 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:59.346 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:59.346 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:15:59.346 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:59.346 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:59.346 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:59.346 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:59.346 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.346 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:59.346 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.346 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.346 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.346 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:59.346 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:59.346 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:59.605 00:15:59.605 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:59.605 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:59.605 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.864 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.864 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.864 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.864 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.864 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.864 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:59.864 { 00:15:59.864 "cntlid": 77, 00:15:59.864 "qid": 0, 00:15:59.864 "state": "enabled", 00:15:59.864 "thread": "nvmf_tgt_poll_group_000", 00:15:59.864 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:59.864 "listen_address": { 00:15:59.864 "trtype": "TCP", 00:15:59.864 "adrfam": "IPv4", 00:15:59.864 "traddr": "10.0.0.2", 00:15:59.864 "trsvcid": "4420" 00:15:59.864 }, 00:15:59.864 "peer_address": { 00:15:59.864 "trtype": "TCP", 00:15:59.864 "adrfam": "IPv4", 00:15:59.864 "traddr": "10.0.0.1", 00:15:59.864 "trsvcid": "50554" 00:15:59.864 }, 00:15:59.864 "auth": { 00:15:59.864 "state": "completed", 00:15:59.864 "digest": "sha384", 00:15:59.864 "dhgroup": "ffdhe4096" 00:15:59.864 } 00:15:59.864 } 00:15:59.864 ]' 00:15:59.864 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:59.864 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:59.864 10:29:33 
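The trace above repeats the same connect_authenticate cycle once per key. A minimal sketch of one cycle, distilled from the commands it prints, assuming the rpc.py path, socket, and NQNs of this run (key1/ckey1 shown; the other keys follow the same shape):

# One connect_authenticate cycle as exercised above (illustrative sketch).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
subnqn=nqn.2024-03.io.spdk:cnode0
# Host app: pin the DH-HMAC-CHAP digest/dhgroup combination under test.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
# Target app (the trace's rpc_cmd wrapper; assumed to use the target's default
# RPC socket): register the host with the key pair being tested.
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
# Host app: attach a controller, authenticating with the same keys, then detach.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
  -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0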
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:00.123 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:00.123 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:00.123 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.123 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.123 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.123 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGRkNmY3MzljOTk4OTQ3NzQ0MjhjN2RiM2E2ZGRiNWJkZjkwYTcxOTU1MjlmNGM1BAAzDQ==: --dhchap-ctrl-secret DHHC-1:01:MzM0M2M0YzhkZDM0NTExZjY2ODg0YjYzNGE1ZTk5ZDE1X0Ef: 00:16:00.123 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OGRkNmY3MzljOTk4OTQ3NzQ0MjhjN2RiM2E2ZGRiNWJkZjkwYTcxOTU1MjlmNGM1BAAzDQ==: --dhchap-ctrl-secret DHHC-1:01:MzM0M2M0YzhkZDM0NTExZjY2ODg0YjYzNGE1ZTk5ZDE1X0Ef: 00:16:00.691 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.949 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:00.949 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.949 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.949 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.949 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:00.949 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:00.949 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:00.949 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:16:00.949 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:00.949 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:00.949 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:00.949 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:00.949 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.950 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:00.950 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.950 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.950 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.950 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:00.950 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:00.950 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:01.214 00:16:01.214 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:01.214 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:01.214 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.473 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.473 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.473 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.473 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.473 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.473 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.473 { 00:16:01.473 "cntlid": 79, 00:16:01.473 "qid": 0, 00:16:01.473 "state": "enabled", 00:16:01.473 "thread": "nvmf_tgt_poll_group_000", 00:16:01.473 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:01.473 "listen_address": { 00:16:01.473 "trtype": "TCP", 00:16:01.473 "adrfam": "IPv4", 00:16:01.473 "traddr": "10.0.0.2", 00:16:01.473 "trsvcid": "4420" 00:16:01.473 }, 00:16:01.473 "peer_address": { 00:16:01.473 "trtype": "TCP", 00:16:01.473 "adrfam": "IPv4", 00:16:01.473 "traddr": "10.0.0.1", 00:16:01.473 "trsvcid": "50582" 00:16:01.473 }, 00:16:01.473 "auth": { 00:16:01.473 "state": "completed", 00:16:01.473 "digest": "sha384", 00:16:01.473 "dhgroup": "ffdhe4096" 00:16:01.473 } 00:16:01.473 } 00:16:01.473 ]' 00:16:01.473 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:01.473 10:29:35 
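One detail worth noting in the expansion above: ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) is why the key3 iterations call nvmf_subsystem_add_host with --dhchap-key key3 alone. A short sketch of the idiom, with $3 renamed to keyid for readability:

# ${var:+word} expands to word only when var is set and non-empty, so an
# empty ckeys[keyid] leaves the array empty and the controller-key flag is
# simply omitted -- no if/else needed.
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" "${ckey[@]}"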
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:01.473 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:01.732 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:01.732 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:01.732 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.732 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.732 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.732 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmY2NjMxZDBkMTgzNzk2NzI2YjAzM2E3M2U5NDE3Y2MwM2U2NTA0Y2IxOWVlNTYzNDRlYTFmZjBjODI1ZWFhM7hA1AQ=: 00:16:01.732 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NmY2NjMxZDBkMTgzNzk2NzI2YjAzM2E3M2U5NDE3Y2MwM2U2NTA0Y2IxOWVlNTYzNDRlYTFmZjBjODI1ZWFhM7hA1AQ=: 00:16:02.300 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.300 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:02.300 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.300 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.300 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.300 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:02.300 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:02.300 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:02.300 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:02.558 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:16:02.558 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:02.558 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:02.558 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:02.558 10:29:36 
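The target/auth.sh@119 and @120 markers above reveal the overall loop structure: every dhgroup is crossed with every key index, re-applying bdev_nvme_set_options each time. Roughly, under that reading:

# Loop structure implied by the @119-@123 trace markers (sketch).
for dhgroup in "${dhgroups[@]}"; do    # ffdhe4096, ffdhe6144, ffdhe8192 in this run
  for keyid in "${!keys[@]}"; do       # 0..3
    hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
    connect_authenticate sha384 "$dhgroup" "$keyid"
  done
done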
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:02.558 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.558 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.558 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.558 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.558 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.558 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.558 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.558 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.126 00:16:03.126 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:03.126 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:03.126 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.126 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.126 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.126 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.126 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.126 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.126 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.126 { 00:16:03.126 "cntlid": 81, 00:16:03.126 "qid": 0, 00:16:03.126 "state": "enabled", 00:16:03.126 "thread": "nvmf_tgt_poll_group_000", 00:16:03.126 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:03.126 "listen_address": { 00:16:03.126 "trtype": "TCP", 00:16:03.126 "adrfam": "IPv4", 00:16:03.126 "traddr": "10.0.0.2", 00:16:03.126 "trsvcid": "4420" 00:16:03.126 }, 00:16:03.126 "peer_address": { 00:16:03.126 "trtype": "TCP", 00:16:03.126 "adrfam": "IPv4", 00:16:03.126 "traddr": "10.0.0.1", 00:16:03.126 "trsvcid": "50592" 00:16:03.126 }, 00:16:03.126 "auth": { 00:16:03.126 "state": "completed", 00:16:03.126 "digest": 
"sha384", 00:16:03.126 "dhgroup": "ffdhe6144" 00:16:03.126 } 00:16:03.126 } 00:16:03.126 ]' 00:16:03.126 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.126 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:03.126 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.385 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:03.385 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.385 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.385 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.385 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.385 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjkyMjY4NmFjOWQ0Y2IzZWY0OGQ1Y2E3NTgzZjZlODg0M2IxZmQ4YzYwZjU0NTdj7PV+JA==: --dhchap-ctrl-secret DHHC-1:03:ZWZmMTM1MWRjZjk5MzM2OWRmZmIxYWI2NTZhMzgwZDA4YjE2ZjAzYzAzYzdjZGQ1ZDEwZTc2OWM0NTVkYjBmZc7fSUk=: 00:16:03.385 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjkyMjY4NmFjOWQ0Y2IzZWY0OGQ1Y2E3NTgzZjZlODg0M2IxZmQ4YzYwZjU0NTdj7PV+JA==: --dhchap-ctrl-secret DHHC-1:03:ZWZmMTM1MWRjZjk5MzM2OWRmZmIxYWI2NTZhMzgwZDA4YjE2ZjAzYzAzYzdjZGQ1ZDEwZTc2OWM0NTVkYjBmZc7fSUk=: 00:16:03.954 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.954 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:03.954 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.954 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.212 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.212 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.212 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:04.212 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:04.213 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:04.213 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:04.213 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:04.213 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:04.213 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:04.213 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.213 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.213 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.213 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.213 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.213 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.213 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.213 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.780 00:16:04.780 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:04.780 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:04.780 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.780 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.780 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.780 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.780 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.780 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.780 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:04.780 { 00:16:04.780 "cntlid": 83, 00:16:04.780 "qid": 0, 00:16:04.780 "state": "enabled", 00:16:04.780 "thread": "nvmf_tgt_poll_group_000", 00:16:04.780 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:04.780 "listen_address": { 00:16:04.780 "trtype": "TCP", 00:16:04.780 "adrfam": "IPv4", 00:16:04.780 "traddr": "10.0.0.2", 00:16:04.780 
"trsvcid": "4420" 00:16:04.780 }, 00:16:04.780 "peer_address": { 00:16:04.780 "trtype": "TCP", 00:16:04.780 "adrfam": "IPv4", 00:16:04.780 "traddr": "10.0.0.1", 00:16:04.780 "trsvcid": "50632" 00:16:04.780 }, 00:16:04.780 "auth": { 00:16:04.780 "state": "completed", 00:16:04.780 "digest": "sha384", 00:16:04.780 "dhgroup": "ffdhe6144" 00:16:04.780 } 00:16:04.780 } 00:16:04.780 ]' 00:16:04.781 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:04.781 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:05.048 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.048 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:05.048 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.048 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.048 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.048 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.308 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzhlZWUwMGEwMjk3NWMyMTdlODZlNjE0NTNiODQ1YzfgcnyV: --dhchap-ctrl-secret DHHC-1:02:NDFlMTY3M2Y1ZjMzZTIyZDgyMjRjYzhlNzVjZDkxMWU3NmQyY2M3NTg0ZjdiZDNmAuQ/6g==: 00:16:05.308 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MzhlZWUwMGEwMjk3NWMyMTdlODZlNjE0NTNiODQ1YzfgcnyV: --dhchap-ctrl-secret DHHC-1:02:NDFlMTY3M2Y1ZjMzZTIyZDgyMjRjYzhlNzVjZDkxMWU3NmQyY2M3NTg0ZjdiZDNmAuQ/6g==: 00:16:05.876 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.876 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:05.876 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.876 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.876 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.876 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:05.876 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:05.876 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:05.876 
10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:16:05.876 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:05.876 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:05.876 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:05.876 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:05.876 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.876 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.876 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.876 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.876 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.876 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.876 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.876 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.443 00:16:06.443 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:06.443 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:06.443 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.443 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.443 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.443 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.443 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.443 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.443 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:06.443 { 00:16:06.443 "cntlid": 85, 00:16:06.443 "qid": 0, 00:16:06.443 "state": "enabled", 00:16:06.443 "thread": "nvmf_tgt_poll_group_000", 00:16:06.443 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:06.443 "listen_address": { 00:16:06.443 "trtype": "TCP", 00:16:06.443 "adrfam": "IPv4", 00:16:06.443 "traddr": "10.0.0.2", 00:16:06.443 "trsvcid": "4420" 00:16:06.443 }, 00:16:06.443 "peer_address": { 00:16:06.443 "trtype": "TCP", 00:16:06.443 "adrfam": "IPv4", 00:16:06.443 "traddr": "10.0.0.1", 00:16:06.443 "trsvcid": "57812" 00:16:06.443 }, 00:16:06.443 "auth": { 00:16:06.443 "state": "completed", 00:16:06.443 "digest": "sha384", 00:16:06.443 "dhgroup": "ffdhe6144" 00:16:06.443 } 00:16:06.443 } 00:16:06.443 ]' 00:16:06.443 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:06.703 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:06.703 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:06.703 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:06.703 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:06.703 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.703 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.703 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.703 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGRkNmY3MzljOTk4OTQ3NzQ0MjhjN2RiM2E2ZGRiNWJkZjkwYTcxOTU1MjlmNGM1BAAzDQ==: --dhchap-ctrl-secret DHHC-1:01:MzM0M2M0YzhkZDM0NTExZjY2ODg0YjYzNGE1ZTk5ZDE1X0Ef: 00:16:06.703 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OGRkNmY3MzljOTk4OTQ3NzQ0MjhjN2RiM2E2ZGRiNWJkZjkwYTcxOTU1MjlmNGM1BAAzDQ==: --dhchap-ctrl-secret DHHC-1:01:MzM0M2M0YzhkZDM0NTExZjY2ODg0YjYzNGE1ZTk5ZDE1X0Ef: 00:16:07.271 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.271 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:07.271 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.271 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.530 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.530 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:07.530 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:07.530 10:29:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:07.530 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:16:07.530 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:07.530 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:07.530 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:07.530 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:07.530 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.530 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:07.530 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.530 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.530 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.530 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:07.530 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:07.530 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:08.098 00:16:08.098 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:08.098 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.098 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:08.098 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.098 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.098 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.098 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.098 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.098 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:08.098 { 00:16:08.098 "cntlid": 87, 
00:16:08.098 "qid": 0, 00:16:08.098 "state": "enabled", 00:16:08.098 "thread": "nvmf_tgt_poll_group_000", 00:16:08.098 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:08.098 "listen_address": { 00:16:08.098 "trtype": "TCP", 00:16:08.098 "adrfam": "IPv4", 00:16:08.098 "traddr": "10.0.0.2", 00:16:08.098 "trsvcid": "4420" 00:16:08.098 }, 00:16:08.098 "peer_address": { 00:16:08.098 "trtype": "TCP", 00:16:08.098 "adrfam": "IPv4", 00:16:08.098 "traddr": "10.0.0.1", 00:16:08.098 "trsvcid": "57830" 00:16:08.098 }, 00:16:08.098 "auth": { 00:16:08.098 "state": "completed", 00:16:08.098 "digest": "sha384", 00:16:08.098 "dhgroup": "ffdhe6144" 00:16:08.098 } 00:16:08.098 } 00:16:08.098 ]' 00:16:08.098 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:08.357 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:08.357 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:08.357 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:08.357 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:08.357 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.357 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.357 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.615 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmY2NjMxZDBkMTgzNzk2NzI2YjAzM2E3M2U5NDE3Y2MwM2U2NTA0Y2IxOWVlNTYzNDRlYTFmZjBjODI1ZWFhM7hA1AQ=: 00:16:08.615 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NmY2NjMxZDBkMTgzNzk2NzI2YjAzM2E3M2U5NDE3Y2MwM2U2NTA0Y2IxOWVlNTYzNDRlYTFmZjBjODI1ZWFhM7hA1AQ=: 00:16:09.181 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.181 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:09.181 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.181 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.181 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.181 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:09.181 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:09.181 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:09.181 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:09.181 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:09.181 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:09.181 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:09.181 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:09.181 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:09.181 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.181 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.181 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.181 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.181 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.181 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.181 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.182 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.748 00:16:09.748 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.748 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.748 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.007 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.007 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.007 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.007 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.007 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.007 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.007 { 00:16:10.007 "cntlid": 89, 00:16:10.007 "qid": 0, 00:16:10.007 "state": "enabled", 00:16:10.007 "thread": "nvmf_tgt_poll_group_000", 00:16:10.007 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:10.007 "listen_address": { 00:16:10.007 "trtype": "TCP", 00:16:10.007 "adrfam": "IPv4", 00:16:10.007 "traddr": "10.0.0.2", 00:16:10.007 "trsvcid": "4420" 00:16:10.007 }, 00:16:10.007 "peer_address": { 00:16:10.007 "trtype": "TCP", 00:16:10.007 "adrfam": "IPv4", 00:16:10.007 "traddr": "10.0.0.1", 00:16:10.007 "trsvcid": "57854" 00:16:10.007 }, 00:16:10.007 "auth": { 00:16:10.007 "state": "completed", 00:16:10.007 "digest": "sha384", 00:16:10.007 "dhgroup": "ffdhe8192" 00:16:10.007 } 00:16:10.007 } 00:16:10.007 ]' 00:16:10.007 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.007 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:10.007 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.007 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:10.007 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.007 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.007 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.007 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.266 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjkyMjY4NmFjOWQ0Y2IzZWY0OGQ1Y2E3NTgzZjZlODg0M2IxZmQ4YzYwZjU0NTdj7PV+JA==: --dhchap-ctrl-secret DHHC-1:03:ZWZmMTM1MWRjZjk5MzM2OWRmZmIxYWI2NTZhMzgwZDA4YjE2ZjAzYzAzYzdjZGQ1ZDEwZTc2OWM0NTVkYjBmZc7fSUk=: 00:16:10.266 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjkyMjY4NmFjOWQ0Y2IzZWY0OGQ1Y2E3NTgzZjZlODg0M2IxZmQ4YzYwZjU0NTdj7PV+JA==: --dhchap-ctrl-secret DHHC-1:03:ZWZmMTM1MWRjZjk5MzM2OWRmZmIxYWI2NTZhMzgwZDA4YjE2ZjAzYzAzYzdjZGQ1ZDEwZTc2OWM0NTVkYjBmZc7fSUk=: 00:16:10.833 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.833 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.833 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:10.833 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.833 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.833 10:29:44 
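The verification step after each attach (@73-@77 above) reduces to four assertions on the RPC output; a sketch, assuming the JSON shape dumped in this log:

# Confirm the controller attached, then inspect the authenticated qpair.
[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]   # per-iteration dhgroup
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]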
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.833 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:10.833 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:10.833 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:11.091 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:11.091 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:11.091 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:11.091 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:11.091 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:11.091 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.091 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.091 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.091 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.091 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.091 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.091 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.091 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.657 00:16:11.657 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:11.657 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:11.657 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.916 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.916 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:11.916 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.916 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.916 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.916 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:11.916 { 00:16:11.916 "cntlid": 91, 00:16:11.916 "qid": 0, 00:16:11.916 "state": "enabled", 00:16:11.916 "thread": "nvmf_tgt_poll_group_000", 00:16:11.916 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:11.916 "listen_address": { 00:16:11.916 "trtype": "TCP", 00:16:11.916 "adrfam": "IPv4", 00:16:11.916 "traddr": "10.0.0.2", 00:16:11.916 "trsvcid": "4420" 00:16:11.916 }, 00:16:11.916 "peer_address": { 00:16:11.916 "trtype": "TCP", 00:16:11.916 "adrfam": "IPv4", 00:16:11.916 "traddr": "10.0.0.1", 00:16:11.916 "trsvcid": "57882" 00:16:11.916 }, 00:16:11.916 "auth": { 00:16:11.916 "state": "completed", 00:16:11.916 "digest": "sha384", 00:16:11.916 "dhgroup": "ffdhe8192" 00:16:11.916 } 00:16:11.916 } 00:16:11.916 ]' 00:16:11.916 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:11.916 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:11.916 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:11.916 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:11.917 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:11.917 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.917 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.917 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.175 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzhlZWUwMGEwMjk3NWMyMTdlODZlNjE0NTNiODQ1YzfgcnyV: --dhchap-ctrl-secret DHHC-1:02:NDFlMTY3M2Y1ZjMzZTIyZDgyMjRjYzhlNzVjZDkxMWU3NmQyY2M3NTg0ZjdiZDNmAuQ/6g==: 00:16:12.175 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MzhlZWUwMGEwMjk3NWMyMTdlODZlNjE0NTNiODQ1YzfgcnyV: --dhchap-ctrl-secret DHHC-1:02:NDFlMTY3M2Y1ZjMzZTIyZDgyMjRjYzhlNzVjZDkxMWU3NmQyY2M3NTg0ZjdiZDNmAuQ/6g==: 00:16:12.743 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.743 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.743 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:12.743 10:29:46 
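Finally, every hostrpc line above expands identically at target/auth.sh@31, so the wrapper is effectively just rpc.py pointed at the host application's socket:

# hostrpc, as its @31 expansion shows throughout this log.
hostrpc() {
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
}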
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.743 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.743 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.743 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:12.743 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:12.743 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:13.002 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:13.002 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.002 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:13.002 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:13.002 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:13.002 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.002 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.002 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.002 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.002 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.002 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.002 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.002 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.261 00:16:13.261 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:13.261 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.261 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:13.520 10:29:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.520 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.520 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.520 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.520 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.520 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:13.520 { 00:16:13.520 "cntlid": 93, 00:16:13.520 "qid": 0, 00:16:13.520 "state": "enabled", 00:16:13.520 "thread": "nvmf_tgt_poll_group_000", 00:16:13.520 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:13.520 "listen_address": { 00:16:13.520 "trtype": "TCP", 00:16:13.520 "adrfam": "IPv4", 00:16:13.520 "traddr": "10.0.0.2", 00:16:13.520 "trsvcid": "4420" 00:16:13.520 }, 00:16:13.520 "peer_address": { 00:16:13.520 "trtype": "TCP", 00:16:13.520 "adrfam": "IPv4", 00:16:13.520 "traddr": "10.0.0.1", 00:16:13.520 "trsvcid": "57908" 00:16:13.520 }, 00:16:13.520 "auth": { 00:16:13.520 "state": "completed", 00:16:13.520 "digest": "sha384", 00:16:13.520 "dhgroup": "ffdhe8192" 00:16:13.520 } 00:16:13.520 } 00:16:13.520 ]' 00:16:13.520 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:13.520 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:13.520 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:13.778 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:13.778 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:13.778 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.778 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.778 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.087 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGRkNmY3MzljOTk4OTQ3NzQ0MjhjN2RiM2E2ZGRiNWJkZjkwYTcxOTU1MjlmNGM1BAAzDQ==: --dhchap-ctrl-secret DHHC-1:01:MzM0M2M0YzhkZDM0NTExZjY2ODg0YjYzNGE1ZTk5ZDE1X0Ef: 00:16:14.087 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OGRkNmY3MzljOTk4OTQ3NzQ0MjhjN2RiM2E2ZGRiNWJkZjkwYTcxOTU1MjlmNGM1BAAzDQ==: --dhchap-ctrl-secret DHHC-1:01:MzM0M2M0YzhkZDM0NTExZjY2ODg0YjYzNGE1ZTk5ZDE1X0Ef: 00:16:14.379 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.379 10:29:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:14.379 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.379 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.379 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.379 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:14.379 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:14.379 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:14.670 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:14.670 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:14.670 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:14.670 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:14.670 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:14.670 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.670 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:14.670 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.670 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.670 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.670 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:14.670 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:14.670 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:15.238 00:16:15.238 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.238 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.238 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.238 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.238 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.238 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.238 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.497 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.497 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.497 { 00:16:15.497 "cntlid": 95, 00:16:15.497 "qid": 0, 00:16:15.497 "state": "enabled", 00:16:15.497 "thread": "nvmf_tgt_poll_group_000", 00:16:15.497 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:15.497 "listen_address": { 00:16:15.497 "trtype": "TCP", 00:16:15.497 "adrfam": "IPv4", 00:16:15.497 "traddr": "10.0.0.2", 00:16:15.497 "trsvcid": "4420" 00:16:15.497 }, 00:16:15.497 "peer_address": { 00:16:15.497 "trtype": "TCP", 00:16:15.497 "adrfam": "IPv4", 00:16:15.497 "traddr": "10.0.0.1", 00:16:15.497 "trsvcid": "57936" 00:16:15.497 }, 00:16:15.497 "auth": { 00:16:15.497 "state": "completed", 00:16:15.497 "digest": "sha384", 00:16:15.497 "dhgroup": "ffdhe8192" 00:16:15.497 } 00:16:15.497 } 00:16:15.497 ]' 00:16:15.497 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.497 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:15.497 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.497 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:15.497 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.497 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.497 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.497 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.756 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmY2NjMxZDBkMTgzNzk2NzI2YjAzM2E3M2U5NDE3Y2MwM2U2NTA0Y2IxOWVlNTYzNDRlYTFmZjBjODI1ZWFhM7hA1AQ=: 00:16:15.756 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NmY2NjMxZDBkMTgzNzk2NzI2YjAzM2E3M2U5NDE3Y2MwM2U2NTA0Y2IxOWVlNTYzNDRlYTFmZjBjODI1ZWFhM7hA1AQ=: 00:16:16.322 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.323 10:29:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:16.323 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.323 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.323 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.323 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:16.323 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:16.323 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.323 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:16.323 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:16.581 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:16:16.581 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.581 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:16.581 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:16.581 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:16.581 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.581 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.581 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.581 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.581 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.581 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.581 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.581 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.839 00:16:16.839 
10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.839 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:16.839 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.839 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.839 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.098 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.098 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.098 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.098 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.098 { 00:16:17.098 "cntlid": 97, 00:16:17.098 "qid": 0, 00:16:17.098 "state": "enabled", 00:16:17.098 "thread": "nvmf_tgt_poll_group_000", 00:16:17.098 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:17.098 "listen_address": { 00:16:17.098 "trtype": "TCP", 00:16:17.098 "adrfam": "IPv4", 00:16:17.098 "traddr": "10.0.0.2", 00:16:17.098 "trsvcid": "4420" 00:16:17.098 }, 00:16:17.098 "peer_address": { 00:16:17.098 "trtype": "TCP", 00:16:17.098 "adrfam": "IPv4", 00:16:17.098 "traddr": "10.0.0.1", 00:16:17.098 "trsvcid": "43322" 00:16:17.098 }, 00:16:17.098 "auth": { 00:16:17.098 "state": "completed", 00:16:17.098 "digest": "sha512", 00:16:17.098 "dhgroup": "null" 00:16:17.098 } 00:16:17.098 } 00:16:17.098 ]' 00:16:17.098 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:17.098 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:17.098 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.098 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:17.098 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.098 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.098 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.098 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.355 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjkyMjY4NmFjOWQ0Y2IzZWY0OGQ1Y2E3NTgzZjZlODg0M2IxZmQ4YzYwZjU0NTdj7PV+JA==: --dhchap-ctrl-secret DHHC-1:03:ZWZmMTM1MWRjZjk5MzM2OWRmZmIxYWI2NTZhMzgwZDA4YjE2ZjAzYzAzYzdjZGQ1ZDEwZTc2OWM0NTVkYjBmZc7fSUk=: 00:16:17.355 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjkyMjY4NmFjOWQ0Y2IzZWY0OGQ1Y2E3NTgzZjZlODg0M2IxZmQ4YzYwZjU0NTdj7PV+JA==: --dhchap-ctrl-secret DHHC-1:03:ZWZmMTM1MWRjZjk5MzM2OWRmZmIxYWI2NTZhMzgwZDA4YjE2ZjAzYzAzYzdjZGQ1ZDEwZTc2OWM0NTVkYjBmZc7fSUk=: 00:16:17.921 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.921 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:17.921 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.921 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.921 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.921 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.921 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:17.921 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:17.921 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:16:17.921 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:17.921 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:17.921 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:17.921 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:17.921 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.921 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.921 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.921 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.921 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.922 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.922 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.922 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.179 00:16:18.179 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.180 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.180 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.438 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.438 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.438 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.438 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.438 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.438 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.438 { 00:16:18.438 "cntlid": 99, 00:16:18.438 "qid": 0, 00:16:18.438 "state": "enabled", 00:16:18.438 "thread": "nvmf_tgt_poll_group_000", 00:16:18.438 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:18.438 "listen_address": { 00:16:18.438 "trtype": "TCP", 00:16:18.438 "adrfam": "IPv4", 00:16:18.438 "traddr": "10.0.0.2", 00:16:18.438 "trsvcid": "4420" 00:16:18.438 }, 00:16:18.438 "peer_address": { 00:16:18.438 "trtype": "TCP", 00:16:18.438 "adrfam": "IPv4", 00:16:18.438 "traddr": "10.0.0.1", 00:16:18.438 "trsvcid": "43348" 00:16:18.438 }, 00:16:18.438 "auth": { 00:16:18.438 "state": "completed", 00:16:18.438 "digest": "sha512", 00:16:18.438 "dhgroup": "null" 00:16:18.438 } 00:16:18.438 } 00:16:18.438 ]' 00:16:18.438 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.438 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:18.438 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.696 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:18.696 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.696 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.696 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.696 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.954 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzhlZWUwMGEwMjk3NWMyMTdlODZlNjE0NTNiODQ1YzfgcnyV: --dhchap-ctrl-secret DHHC-1:02:NDFlMTY3M2Y1ZjMzZTIyZDgyMjRjYzhlNzVjZDkxMWU3NmQyY2M3NTg0ZjdiZDNmAuQ/6g==: 00:16:18.954 10:29:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MzhlZWUwMGEwMjk3NWMyMTdlODZlNjE0NTNiODQ1YzfgcnyV: --dhchap-ctrl-secret DHHC-1:02:NDFlMTY3M2Y1ZjMzZTIyZDgyMjRjYzhlNzVjZDkxMWU3NmQyY2M3NTg0ZjdiZDNmAuQ/6g==: 00:16:19.521 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.521 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:19.521 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.521 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.521 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.521 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.521 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:19.521 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:19.521 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:16:19.521 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.521 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:19.521 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:19.521 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:19.521 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.521 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.521 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.521 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.521 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.521 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.521 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
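The round in progress here (sha512 digest, null DH group, key2) follows the same fixed sequence as every digest/dhgroup/key combination in this test. A condensed bash sketch of one round, reconstructed from the commands logged above — the rpc.py path, RPC socket, addresses, NQNs, and flags are verbatim from the trace, while the variables and comments are added for readability; this is a sketch of the flow, not the verbatim target/auth.sh:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTSOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562

# Host side: restrict the initiator to the digest/dhgroup pair under test.
$RPC -s $HOSTSOCK bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null

# Target side: authorize the host NQN with this round's DH-HMAC-CHAP key pair
# (rpc_cmd in the trace talks to the target application's default RPC socket).
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Host side: attach a controller; the fabric connect performs in-band authentication.
$RPC -s $HOSTSOCK bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Confirm the controller came up, inspect the authenticated qpair, then tear down.
$RPC -s $HOSTSOCK bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
$RPC nvmf_subsystem_get_qpairs $SUBNQN                           # auth block checked later
$RPC -s $HOSTSOCK bdev_nvme_detach_controller nvme0

One detail worth noticing in the trace: the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion at target/auth.sh@68 emits the controller-key flag only when a controller key exists for that key id, which is why the key3 rounds add the host with --dhchap-key key3 alone (one-way rather than bidirectional authentication).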
00:16:19.521 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.780 00:16:19.780 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:19.780 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.780 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.038 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.038 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.038 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.038 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.038 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.038 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.038 { 00:16:20.038 "cntlid": 101, 00:16:20.038 "qid": 0, 00:16:20.038 "state": "enabled", 00:16:20.038 "thread": "nvmf_tgt_poll_group_000", 00:16:20.038 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:20.038 "listen_address": { 00:16:20.038 "trtype": "TCP", 00:16:20.038 "adrfam": "IPv4", 00:16:20.038 "traddr": "10.0.0.2", 00:16:20.038 "trsvcid": "4420" 00:16:20.038 }, 00:16:20.038 "peer_address": { 00:16:20.038 "trtype": "TCP", 00:16:20.038 "adrfam": "IPv4", 00:16:20.038 "traddr": "10.0.0.1", 00:16:20.038 "trsvcid": "43376" 00:16:20.038 }, 00:16:20.038 "auth": { 00:16:20.038 "state": "completed", 00:16:20.038 "digest": "sha512", 00:16:20.038 "dhgroup": "null" 00:16:20.038 } 00:16:20.038 } 00:16:20.038 ]' 00:16:20.038 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.038 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:20.038 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.038 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:20.038 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.296 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.296 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.296 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.297 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:OGRkNmY3MzljOTk4OTQ3NzQ0MjhjN2RiM2E2ZGRiNWJkZjkwYTcxOTU1MjlmNGM1BAAzDQ==: --dhchap-ctrl-secret DHHC-1:01:MzM0M2M0YzhkZDM0NTExZjY2ODg0YjYzNGE1ZTk5ZDE1X0Ef: 00:16:20.297 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OGRkNmY3MzljOTk4OTQ3NzQ0MjhjN2RiM2E2ZGRiNWJkZjkwYTcxOTU1MjlmNGM1BAAzDQ==: --dhchap-ctrl-secret DHHC-1:01:MzM0M2M0YzhkZDM0NTExZjY2ODg0YjYzNGE1ZTk5ZDE1X0Ef: 00:16:20.862 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.862 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:20.863 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.863 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.863 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.863 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.863 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:20.863 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:21.121 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:16:21.121 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.121 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:21.121 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:21.121 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:21.121 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.121 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:21.121 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.121 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.121 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.121 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:21.121 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:21.121 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:21.378 00:16:21.378 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.378 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.378 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.637 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.637 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.637 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.637 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.637 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.637 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.637 { 00:16:21.637 "cntlid": 103, 00:16:21.637 "qid": 0, 00:16:21.637 "state": "enabled", 00:16:21.637 "thread": "nvmf_tgt_poll_group_000", 00:16:21.637 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:21.637 "listen_address": { 00:16:21.637 "trtype": "TCP", 00:16:21.637 "adrfam": "IPv4", 00:16:21.637 "traddr": "10.0.0.2", 00:16:21.637 "trsvcid": "4420" 00:16:21.637 }, 00:16:21.637 "peer_address": { 00:16:21.637 "trtype": "TCP", 00:16:21.637 "adrfam": "IPv4", 00:16:21.637 "traddr": "10.0.0.1", 00:16:21.637 "trsvcid": "43404" 00:16:21.637 }, 00:16:21.637 "auth": { 00:16:21.637 "state": "completed", 00:16:21.637 "digest": "sha512", 00:16:21.637 "dhgroup": "null" 00:16:21.637 } 00:16:21.637 } 00:16:21.637 ]' 00:16:21.637 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.637 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:21.637 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.637 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:21.637 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.637 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.637 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.637 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.896 10:29:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmY2NjMxZDBkMTgzNzk2NzI2YjAzM2E3M2U5NDE3Y2MwM2U2NTA0Y2IxOWVlNTYzNDRlYTFmZjBjODI1ZWFhM7hA1AQ=: 00:16:21.896 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NmY2NjMxZDBkMTgzNzk2NzI2YjAzM2E3M2U5NDE3Y2MwM2U2NTA0Y2IxOWVlNTYzNDRlYTFmZjBjODI1ZWFhM7hA1AQ=: 00:16:22.460 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.460 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:22.460 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.460 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.460 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.460 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:22.460 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.460 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:22.460 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:22.717 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:16:22.717 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.717 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:22.717 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:22.717 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:22.717 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.717 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.717 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.717 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.717 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.717 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
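After the RPC-attached host detaches, each round repeats the handshake with the kernel initiator via nvme-cli and then revokes the host on the target. A sketch of that leg as logged — the DHHC-1 secrets are shortened to "..." here (the full values appear in the trace), and RPC/SUBNQN/HOSTNQN are the variables from the sketch above; -i and -l are nvme-cli's I/O-queue-count and controller-loss-timeout options:

# Kernel initiator: connect with in-band DH-HMAC-CHAP authentication.
# The digit after "DHHC-1:" identifies the secret's HMAC transform
# (0 = unhashed key, 1/2/3 = SHA-256/384/512).
nvme connect -t tcp -a 10.0.0.2 -n $SUBNQN -i 1 \
    -q $HOSTNQN --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 \
    --dhchap-secret 'DHHC-1:00:...' \
    --dhchap-ctrl-secret 'DHHC-1:03:...'   # omitted on key3 rounds (one-way auth)

nvme disconnect -n $SUBNQN   # trace shows: "disconnected 1 controller(s)"

# Target side: revoke the host so the next digest/dhgroup/key round starts clean.
$RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN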
00:16:22.717 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.717 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.974 00:16:22.974 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:22.974 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.974 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.233 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.233 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.233 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.233 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.233 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.233 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.233 { 00:16:23.233 "cntlid": 105, 00:16:23.233 "qid": 0, 00:16:23.233 "state": "enabled", 00:16:23.233 "thread": "nvmf_tgt_poll_group_000", 00:16:23.233 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:23.233 "listen_address": { 00:16:23.233 "trtype": "TCP", 00:16:23.233 "adrfam": "IPv4", 00:16:23.233 "traddr": "10.0.0.2", 00:16:23.233 "trsvcid": "4420" 00:16:23.233 }, 00:16:23.233 "peer_address": { 00:16:23.233 "trtype": "TCP", 00:16:23.233 "adrfam": "IPv4", 00:16:23.233 "traddr": "10.0.0.1", 00:16:23.233 "trsvcid": "43432" 00:16:23.233 }, 00:16:23.233 "auth": { 00:16:23.233 "state": "completed", 00:16:23.233 "digest": "sha512", 00:16:23.233 "dhgroup": "ffdhe2048" 00:16:23.233 } 00:16:23.233 } 00:16:23.233 ]' 00:16:23.233 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.233 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:23.233 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.233 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:23.233 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.233 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.233 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.233 10:29:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.491 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjkyMjY4NmFjOWQ0Y2IzZWY0OGQ1Y2E3NTgzZjZlODg0M2IxZmQ4YzYwZjU0NTdj7PV+JA==: --dhchap-ctrl-secret DHHC-1:03:ZWZmMTM1MWRjZjk5MzM2OWRmZmIxYWI2NTZhMzgwZDA4YjE2ZjAzYzAzYzdjZGQ1ZDEwZTc2OWM0NTVkYjBmZc7fSUk=: 00:16:23.491 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjkyMjY4NmFjOWQ0Y2IzZWY0OGQ1Y2E3NTgzZjZlODg0M2IxZmQ4YzYwZjU0NTdj7PV+JA==: --dhchap-ctrl-secret DHHC-1:03:ZWZmMTM1MWRjZjk5MzM2OWRmZmIxYWI2NTZhMzgwZDA4YjE2ZjAzYzAzYzdjZGQ1ZDEwZTc2OWM0NTVkYjBmZc7fSUk=: 00:16:24.058 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.058 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:24.058 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.058 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.058 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.058 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.058 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:24.058 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:24.316 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:16:24.316 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.316 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:24.316 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:24.316 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:24.316 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.316 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.316 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.316 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:24.316 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.316 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.316 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.316 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.574 00:16:24.574 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.574 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.574 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.574 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.574 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.574 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.574 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.574 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.574 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.574 { 00:16:24.574 "cntlid": 107, 00:16:24.574 "qid": 0, 00:16:24.574 "state": "enabled", 00:16:24.574 "thread": "nvmf_tgt_poll_group_000", 00:16:24.574 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:24.574 "listen_address": { 00:16:24.574 "trtype": "TCP", 00:16:24.574 "adrfam": "IPv4", 00:16:24.574 "traddr": "10.0.0.2", 00:16:24.574 "trsvcid": "4420" 00:16:24.574 }, 00:16:24.574 "peer_address": { 00:16:24.574 "trtype": "TCP", 00:16:24.574 "adrfam": "IPv4", 00:16:24.574 "traddr": "10.0.0.1", 00:16:24.574 "trsvcid": "43472" 00:16:24.574 }, 00:16:24.574 "auth": { 00:16:24.574 "state": "completed", 00:16:24.574 "digest": "sha512", 00:16:24.574 "dhgroup": "ffdhe2048" 00:16:24.574 } 00:16:24.574 } 00:16:24.574 ]' 00:16:24.574 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.832 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:24.832 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.832 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:24.832 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:16:24.832 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.832 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.832 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.092 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzhlZWUwMGEwMjk3NWMyMTdlODZlNjE0NTNiODQ1YzfgcnyV: --dhchap-ctrl-secret DHHC-1:02:NDFlMTY3M2Y1ZjMzZTIyZDgyMjRjYzhlNzVjZDkxMWU3NmQyY2M3NTg0ZjdiZDNmAuQ/6g==: 00:16:25.092 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MzhlZWUwMGEwMjk3NWMyMTdlODZlNjE0NTNiODQ1YzfgcnyV: --dhchap-ctrl-secret DHHC-1:02:NDFlMTY3M2Y1ZjMzZTIyZDgyMjRjYzhlNzVjZDkxMWU3NmQyY2M3NTg0ZjdiZDNmAuQ/6g==: 00:16:25.659 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.659 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.659 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:25.659 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.659 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.659 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.659 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.659 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:25.659 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:25.916 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:16:25.916 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.916 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:25.916 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:25.916 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:25.916 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.916 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
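The assertions that follow each authenticated connect are plain jq lookups against the first qpair returned by nvmf_subsystem_get_qpairs, matched with bash [[ ... ]] tests. Equivalent checks for the round in progress here (sha512 digest, ffdhe2048 DH group), written as a sketch with ordinary string comparison in place of the trace's escaped pattern matches:

qpairs=$($RPC nvmf_subsystem_get_qpairs $SUBNQN)

# The trace verifies these three fields of the "auth" block after every connect.
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

"state": "completed" in the qpair's auth block is the target-side confirmation that the DH-HMAC-CHAP exchange finished, complementing the host-side check that the controller name came back from bdev_nvme_get_controllers.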
00:16:25.916 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.916 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.916 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.916 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.916 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.916 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.916 00:16:26.173 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.173 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.173 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.173 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.173 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.173 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.173 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.173 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.173 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.173 { 00:16:26.173 "cntlid": 109, 00:16:26.173 "qid": 0, 00:16:26.173 "state": "enabled", 00:16:26.173 "thread": "nvmf_tgt_poll_group_000", 00:16:26.173 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:26.173 "listen_address": { 00:16:26.173 "trtype": "TCP", 00:16:26.173 "adrfam": "IPv4", 00:16:26.173 "traddr": "10.0.0.2", 00:16:26.173 "trsvcid": "4420" 00:16:26.173 }, 00:16:26.173 "peer_address": { 00:16:26.174 "trtype": "TCP", 00:16:26.174 "adrfam": "IPv4", 00:16:26.174 "traddr": "10.0.0.1", 00:16:26.174 "trsvcid": "38910" 00:16:26.174 }, 00:16:26.174 "auth": { 00:16:26.174 "state": "completed", 00:16:26.174 "digest": "sha512", 00:16:26.174 "dhgroup": "ffdhe2048" 00:16:26.174 } 00:16:26.174 } 00:16:26.174 ]' 00:16:26.174 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.431 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:26.431 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.431 10:30:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:26.431 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.431 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.431 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.431 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.688 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGRkNmY3MzljOTk4OTQ3NzQ0MjhjN2RiM2E2ZGRiNWJkZjkwYTcxOTU1MjlmNGM1BAAzDQ==: --dhchap-ctrl-secret DHHC-1:01:MzM0M2M0YzhkZDM0NTExZjY2ODg0YjYzNGE1ZTk5ZDE1X0Ef: 00:16:26.688 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OGRkNmY3MzljOTk4OTQ3NzQ0MjhjN2RiM2E2ZGRiNWJkZjkwYTcxOTU1MjlmNGM1BAAzDQ==: --dhchap-ctrl-secret DHHC-1:01:MzM0M2M0YzhkZDM0NTExZjY2ODg0YjYzNGE1ZTk5ZDE1X0Ef: 00:16:27.254 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.254 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.254 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:27.254 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.254 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.254 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.254 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.254 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:27.254 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:27.513 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:16:27.513 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.513 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:27.513 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:27.513 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:27.513 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.513 10:30:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:27.513 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.513 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.513 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.513 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:27.513 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:27.513 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:27.772 00:16:27.772 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.772 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.772 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.772 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.772 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.772 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.772 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.772 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.772 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.772 { 00:16:27.772 "cntlid": 111, 00:16:27.772 "qid": 0, 00:16:27.772 "state": "enabled", 00:16:27.772 "thread": "nvmf_tgt_poll_group_000", 00:16:27.772 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:27.772 "listen_address": { 00:16:27.772 "trtype": "TCP", 00:16:27.772 "adrfam": "IPv4", 00:16:27.772 "traddr": "10.0.0.2", 00:16:27.772 "trsvcid": "4420" 00:16:27.772 }, 00:16:27.772 "peer_address": { 00:16:27.772 "trtype": "TCP", 00:16:27.772 "adrfam": "IPv4", 00:16:27.772 "traddr": "10.0.0.1", 00:16:27.772 "trsvcid": "38938" 00:16:27.772 }, 00:16:27.772 "auth": { 00:16:27.772 "state": "completed", 00:16:27.772 "digest": "sha512", 00:16:27.772 "dhgroup": "ffdhe2048" 00:16:27.772 } 00:16:27.772 } 00:16:27.772 ]' 00:16:27.772 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.030 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:28.030 
10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.030 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:28.030 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.030 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.030 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.030 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.288 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmY2NjMxZDBkMTgzNzk2NzI2YjAzM2E3M2U5NDE3Y2MwM2U2NTA0Y2IxOWVlNTYzNDRlYTFmZjBjODI1ZWFhM7hA1AQ=: 00:16:28.288 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NmY2NjMxZDBkMTgzNzk2NzI2YjAzM2E3M2U5NDE3Y2MwM2U2NTA0Y2IxOWVlNTYzNDRlYTFmZjBjODI1ZWFhM7hA1AQ=: 00:16:28.855 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.855 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:28.855 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.855 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.855 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.855 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:28.855 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.855 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:28.855 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:28.855 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:16:28.855 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:28.855 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:28.855 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:28.855 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:28.855 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.855 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.855 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.855 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.855 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.855 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.855 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.855 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.113 00:16:29.113 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.113 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.113 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.371 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.371 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.371 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.371 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.371 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.371 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:29.371 { 00:16:29.371 "cntlid": 113, 00:16:29.371 "qid": 0, 00:16:29.371 "state": "enabled", 00:16:29.371 "thread": "nvmf_tgt_poll_group_000", 00:16:29.371 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:29.371 "listen_address": { 00:16:29.371 "trtype": "TCP", 00:16:29.371 "adrfam": "IPv4", 00:16:29.371 "traddr": "10.0.0.2", 00:16:29.371 "trsvcid": "4420" 00:16:29.371 }, 00:16:29.371 "peer_address": { 00:16:29.371 "trtype": "TCP", 00:16:29.371 "adrfam": "IPv4", 00:16:29.371 "traddr": "10.0.0.1", 00:16:29.371 "trsvcid": "38964" 00:16:29.371 }, 00:16:29.371 "auth": { 00:16:29.371 "state": "completed", 00:16:29.371 "digest": "sha512", 00:16:29.371 "dhgroup": "ffdhe3072" 00:16:29.371 } 00:16:29.371 } 00:16:29.371 ]' 00:16:29.371 10:30:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:29.629 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:29.629 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:29.629 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:29.629 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:29.629 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.629 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.629 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.887 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjkyMjY4NmFjOWQ0Y2IzZWY0OGQ1Y2E3NTgzZjZlODg0M2IxZmQ4YzYwZjU0NTdj7PV+JA==: --dhchap-ctrl-secret DHHC-1:03:ZWZmMTM1MWRjZjk5MzM2OWRmZmIxYWI2NTZhMzgwZDA4YjE2ZjAzYzAzYzdjZGQ1ZDEwZTc2OWM0NTVkYjBmZc7fSUk=: 00:16:29.887 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjkyMjY4NmFjOWQ0Y2IzZWY0OGQ1Y2E3NTgzZjZlODg0M2IxZmQ4YzYwZjU0NTdj7PV+JA==: --dhchap-ctrl-secret DHHC-1:03:ZWZmMTM1MWRjZjk5MzM2OWRmZmIxYWI2NTZhMzgwZDA4YjE2ZjAzYzAzYzdjZGQ1ZDEwZTc2OWM0NTVkYjBmZc7fSUk=: 00:16:30.453 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.453 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:30.454 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.454 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.454 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.454 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.454 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:30.454 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:30.454 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:30.454 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:30.454 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:16:30.454 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:30.454 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:30.454 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.454 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.454 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.454 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.454 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.454 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.454 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.454 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.712 00:16:30.712 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.712 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.712 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.971 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.971 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.971 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.971 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.971 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.971 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.971 { 00:16:30.971 "cntlid": 115, 00:16:30.971 "qid": 0, 00:16:30.971 "state": "enabled", 00:16:30.971 "thread": "nvmf_tgt_poll_group_000", 00:16:30.971 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:30.971 "listen_address": { 00:16:30.971 "trtype": "TCP", 00:16:30.971 "adrfam": "IPv4", 00:16:30.971 "traddr": "10.0.0.2", 00:16:30.971 "trsvcid": "4420" 00:16:30.971 }, 00:16:30.971 "peer_address": { 00:16:30.971 "trtype": "TCP", 00:16:30.971 "adrfam": "IPv4", 
00:16:30.971 "traddr": "10.0.0.1", 00:16:30.971 "trsvcid": "38986" 00:16:30.971 }, 00:16:30.971 "auth": { 00:16:30.971 "state": "completed", 00:16:30.971 "digest": "sha512", 00:16:30.971 "dhgroup": "ffdhe3072" 00:16:30.971 } 00:16:30.971 } 00:16:30.971 ]' 00:16:30.971 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.971 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:30.971 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.227 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:31.227 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.227 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.227 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.227 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.227 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzhlZWUwMGEwMjk3NWMyMTdlODZlNjE0NTNiODQ1YzfgcnyV: --dhchap-ctrl-secret DHHC-1:02:NDFlMTY3M2Y1ZjMzZTIyZDgyMjRjYzhlNzVjZDkxMWU3NmQyY2M3NTg0ZjdiZDNmAuQ/6g==: 00:16:31.227 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MzhlZWUwMGEwMjk3NWMyMTdlODZlNjE0NTNiODQ1YzfgcnyV: --dhchap-ctrl-secret DHHC-1:02:NDFlMTY3M2Y1ZjMzZTIyZDgyMjRjYzhlNzVjZDkxMWU3NmQyY2M3NTg0ZjdiZDNmAuQ/6g==: 00:16:31.794 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.794 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:31.794 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.794 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.794 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.794 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.794 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:31.794 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:32.053 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:16:32.053 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.053 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:32.053 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:32.053 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:32.053 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.053 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.053 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.053 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.053 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.053 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.053 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.053 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.311 00:16:32.311 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.311 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.312 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.570 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.570 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.570 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.570 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.570 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.570 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.570 { 00:16:32.570 "cntlid": 117, 00:16:32.570 "qid": 0, 00:16:32.570 "state": "enabled", 00:16:32.570 "thread": "nvmf_tgt_poll_group_000", 00:16:32.570 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:32.570 "listen_address": { 00:16:32.570 "trtype": "TCP", 
00:16:32.570 "adrfam": "IPv4", 00:16:32.570 "traddr": "10.0.0.2", 00:16:32.570 "trsvcid": "4420" 00:16:32.570 }, 00:16:32.570 "peer_address": { 00:16:32.570 "trtype": "TCP", 00:16:32.570 "adrfam": "IPv4", 00:16:32.570 "traddr": "10.0.0.1", 00:16:32.570 "trsvcid": "39018" 00:16:32.570 }, 00:16:32.570 "auth": { 00:16:32.570 "state": "completed", 00:16:32.570 "digest": "sha512", 00:16:32.570 "dhgroup": "ffdhe3072" 00:16:32.570 } 00:16:32.570 } 00:16:32.570 ]' 00:16:32.570 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.570 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:32.570 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.570 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:32.570 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.570 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.570 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.570 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.829 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGRkNmY3MzljOTk4OTQ3NzQ0MjhjN2RiM2E2ZGRiNWJkZjkwYTcxOTU1MjlmNGM1BAAzDQ==: --dhchap-ctrl-secret DHHC-1:01:MzM0M2M0YzhkZDM0NTExZjY2ODg0YjYzNGE1ZTk5ZDE1X0Ef: 00:16:32.829 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OGRkNmY3MzljOTk4OTQ3NzQ0MjhjN2RiM2E2ZGRiNWJkZjkwYTcxOTU1MjlmNGM1BAAzDQ==: --dhchap-ctrl-secret DHHC-1:01:MzM0M2M0YzhkZDM0NTExZjY2ODg0YjYzNGE1ZTk5ZDE1X0Ef: 00:16:33.396 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.396 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:33.396 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.396 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.396 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.396 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:33.396 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:33.396 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:33.654 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:16:33.654 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.654 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:33.654 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:33.654 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:33.654 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.654 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:33.654 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.654 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.654 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.654 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:33.654 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:33.654 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:33.913 00:16:33.913 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.913 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.913 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.170 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.170 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.170 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.170 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.170 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.170 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.170 { 00:16:34.170 "cntlid": 119, 00:16:34.170 "qid": 0, 00:16:34.170 "state": "enabled", 00:16:34.170 "thread": "nvmf_tgt_poll_group_000", 00:16:34.170 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:34.170 "listen_address": { 00:16:34.170 "trtype": "TCP", 00:16:34.170 "adrfam": "IPv4", 00:16:34.170 "traddr": "10.0.0.2", 00:16:34.170 "trsvcid": "4420" 00:16:34.170 }, 00:16:34.170 "peer_address": { 00:16:34.170 "trtype": "TCP", 00:16:34.170 "adrfam": "IPv4", 00:16:34.170 "traddr": "10.0.0.1", 00:16:34.170 "trsvcid": "39046" 00:16:34.170 }, 00:16:34.170 "auth": { 00:16:34.170 "state": "completed", 00:16:34.170 "digest": "sha512", 00:16:34.170 "dhgroup": "ffdhe3072" 00:16:34.170 } 00:16:34.170 } 00:16:34.170 ]' 00:16:34.170 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.170 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:34.170 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.170 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:34.170 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.170 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.170 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.170 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.428 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmY2NjMxZDBkMTgzNzk2NzI2YjAzM2E3M2U5NDE3Y2MwM2U2NTA0Y2IxOWVlNTYzNDRlYTFmZjBjODI1ZWFhM7hA1AQ=: 00:16:34.428 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NmY2NjMxZDBkMTgzNzk2NzI2YjAzM2E3M2U5NDE3Y2MwM2U2NTA0Y2IxOWVlNTYzNDRlYTFmZjBjODI1ZWFhM7hA1AQ=: 00:16:34.995 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.995 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.995 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:34.995 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.996 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.996 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.996 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:34.996 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.996 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:34.996 10:30:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:35.254 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:16:35.254 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.254 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:35.254 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:35.254 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:35.254 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.254 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.254 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.254 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.254 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.254 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.254 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.254 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.513 00:16:35.513 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.513 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.513 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.772 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.772 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.772 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.772 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.772 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.772 10:30:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.772 { 00:16:35.772 "cntlid": 121, 00:16:35.772 "qid": 0, 00:16:35.772 "state": "enabled", 00:16:35.772 "thread": "nvmf_tgt_poll_group_000", 00:16:35.772 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:35.772 "listen_address": { 00:16:35.772 "trtype": "TCP", 00:16:35.772 "adrfam": "IPv4", 00:16:35.772 "traddr": "10.0.0.2", 00:16:35.772 "trsvcid": "4420" 00:16:35.772 }, 00:16:35.772 "peer_address": { 00:16:35.772 "trtype": "TCP", 00:16:35.772 "adrfam": "IPv4", 00:16:35.772 "traddr": "10.0.0.1", 00:16:35.772 "trsvcid": "39082" 00:16:35.772 }, 00:16:35.772 "auth": { 00:16:35.772 "state": "completed", 00:16:35.772 "digest": "sha512", 00:16:35.772 "dhgroup": "ffdhe4096" 00:16:35.772 } 00:16:35.772 } 00:16:35.772 ]' 00:16:35.772 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.772 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:35.772 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.772 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:35.772 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.772 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.772 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.772 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.031 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjkyMjY4NmFjOWQ0Y2IzZWY0OGQ1Y2E3NTgzZjZlODg0M2IxZmQ4YzYwZjU0NTdj7PV+JA==: --dhchap-ctrl-secret DHHC-1:03:ZWZmMTM1MWRjZjk5MzM2OWRmZmIxYWI2NTZhMzgwZDA4YjE2ZjAzYzAzYzdjZGQ1ZDEwZTc2OWM0NTVkYjBmZc7fSUk=: 00:16:36.031 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjkyMjY4NmFjOWQ0Y2IzZWY0OGQ1Y2E3NTgzZjZlODg0M2IxZmQ4YzYwZjU0NTdj7PV+JA==: --dhchap-ctrl-secret DHHC-1:03:ZWZmMTM1MWRjZjk5MzM2OWRmZmIxYWI2NTZhMzgwZDA4YjE2ZjAzYzAzYzdjZGQ1ZDEwZTc2OWM0NTVkYjBmZc7fSUk=: 00:16:36.598 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.598 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:36.598 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.598 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.598 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:36.598 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.598 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:36.598 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:36.858 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:16:36.858 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.858 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:36.858 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:36.858 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:36.858 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.858 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.858 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.858 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.858 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.858 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.858 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.858 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.116 00:16:37.116 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.116 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.116 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.375 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.375 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.375 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.375 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.375 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.375 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.375 { 00:16:37.375 "cntlid": 123, 00:16:37.375 "qid": 0, 00:16:37.375 "state": "enabled", 00:16:37.375 "thread": "nvmf_tgt_poll_group_000", 00:16:37.375 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:37.375 "listen_address": { 00:16:37.375 "trtype": "TCP", 00:16:37.375 "adrfam": "IPv4", 00:16:37.375 "traddr": "10.0.0.2", 00:16:37.375 "trsvcid": "4420" 00:16:37.375 }, 00:16:37.375 "peer_address": { 00:16:37.375 "trtype": "TCP", 00:16:37.375 "adrfam": "IPv4", 00:16:37.375 "traddr": "10.0.0.1", 00:16:37.375 "trsvcid": "43252" 00:16:37.375 }, 00:16:37.375 "auth": { 00:16:37.375 "state": "completed", 00:16:37.375 "digest": "sha512", 00:16:37.375 "dhgroup": "ffdhe4096" 00:16:37.375 } 00:16:37.375 } 00:16:37.375 ]' 00:16:37.375 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.375 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:37.375 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.375 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:37.375 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.375 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.375 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.375 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.634 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzhlZWUwMGEwMjk3NWMyMTdlODZlNjE0NTNiODQ1YzfgcnyV: --dhchap-ctrl-secret DHHC-1:02:NDFlMTY3M2Y1ZjMzZTIyZDgyMjRjYzhlNzVjZDkxMWU3NmQyY2M3NTg0ZjdiZDNmAuQ/6g==: 00:16:37.634 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MzhlZWUwMGEwMjk3NWMyMTdlODZlNjE0NTNiODQ1YzfgcnyV: --dhchap-ctrl-secret DHHC-1:02:NDFlMTY3M2Y1ZjMzZTIyZDgyMjRjYzhlNzVjZDkxMWU3NmQyY2M3NTg0ZjdiZDNmAuQ/6g==: 00:16:38.201 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.201 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.201 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:38.201 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.201 10:30:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.201 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.201 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.201 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:38.201 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:38.460 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:16:38.460 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.460 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:38.460 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:38.460 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:38.460 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.460 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.460 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.460 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.460 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.460 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.460 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.460 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.718 00:16:38.718 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.718 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.718 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.977 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.977 10:30:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.977 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.977 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.977 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.977 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.977 { 00:16:38.977 "cntlid": 125, 00:16:38.977 "qid": 0, 00:16:38.977 "state": "enabled", 00:16:38.977 "thread": "nvmf_tgt_poll_group_000", 00:16:38.977 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:38.977 "listen_address": { 00:16:38.977 "trtype": "TCP", 00:16:38.977 "adrfam": "IPv4", 00:16:38.977 "traddr": "10.0.0.2", 00:16:38.977 "trsvcid": "4420" 00:16:38.977 }, 00:16:38.977 "peer_address": { 00:16:38.977 "trtype": "TCP", 00:16:38.977 "adrfam": "IPv4", 00:16:38.977 "traddr": "10.0.0.1", 00:16:38.977 "trsvcid": "43280" 00:16:38.977 }, 00:16:38.977 "auth": { 00:16:38.977 "state": "completed", 00:16:38.977 "digest": "sha512", 00:16:38.977 "dhgroup": "ffdhe4096" 00:16:38.977 } 00:16:38.977 } 00:16:38.977 ]' 00:16:38.977 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.977 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:38.977 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.977 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:38.977 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.977 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.977 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.977 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.235 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGRkNmY3MzljOTk4OTQ3NzQ0MjhjN2RiM2E2ZGRiNWJkZjkwYTcxOTU1MjlmNGM1BAAzDQ==: --dhchap-ctrl-secret DHHC-1:01:MzM0M2M0YzhkZDM0NTExZjY2ODg0YjYzNGE1ZTk5ZDE1X0Ef: 00:16:39.235 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OGRkNmY3MzljOTk4OTQ3NzQ0MjhjN2RiM2E2ZGRiNWJkZjkwYTcxOTU1MjlmNGM1BAAzDQ==: --dhchap-ctrl-secret DHHC-1:01:MzM0M2M0YzhkZDM0NTExZjY2ODg0YjYzNGE1ZTk5ZDE1X0Ef: 00:16:39.802 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.802 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.802 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:39.802 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.802 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.802 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.802 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.802 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:39.802 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:40.060 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:16:40.060 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.060 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:40.060 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:40.060 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:40.060 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.060 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:40.060 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.060 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.060 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.060 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:40.060 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:40.060 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:40.318 00:16:40.318 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.318 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.318 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.576 10:30:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.576 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.576 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.576 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.576 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.576 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.576 { 00:16:40.576 "cntlid": 127, 00:16:40.576 "qid": 0, 00:16:40.576 "state": "enabled", 00:16:40.576 "thread": "nvmf_tgt_poll_group_000", 00:16:40.576 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:40.576 "listen_address": { 00:16:40.576 "trtype": "TCP", 00:16:40.576 "adrfam": "IPv4", 00:16:40.576 "traddr": "10.0.0.2", 00:16:40.576 "trsvcid": "4420" 00:16:40.576 }, 00:16:40.576 "peer_address": { 00:16:40.576 "trtype": "TCP", 00:16:40.576 "adrfam": "IPv4", 00:16:40.576 "traddr": "10.0.0.1", 00:16:40.576 "trsvcid": "43316" 00:16:40.576 }, 00:16:40.576 "auth": { 00:16:40.576 "state": "completed", 00:16:40.576 "digest": "sha512", 00:16:40.576 "dhgroup": "ffdhe4096" 00:16:40.576 } 00:16:40.576 } 00:16:40.576 ]' 00:16:40.576 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.576 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:40.576 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.576 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:40.576 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.576 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.576 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.576 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.834 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmY2NjMxZDBkMTgzNzk2NzI2YjAzM2E3M2U5NDE3Y2MwM2U2NTA0Y2IxOWVlNTYzNDRlYTFmZjBjODI1ZWFhM7hA1AQ=: 00:16:40.834 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NmY2NjMxZDBkMTgzNzk2NzI2YjAzM2E3M2U5NDE3Y2MwM2U2NTA0Y2IxOWVlNTYzNDRlYTFmZjBjODI1ZWFhM7hA1AQ=: 00:16:41.400 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.400 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:41.400 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.400 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.400 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.400 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:41.400 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.400 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:41.400 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:41.659 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:16:41.659 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.659 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:41.659 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:41.659 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:41.659 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.659 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.659 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.659 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.659 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.659 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.659 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.659 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.917 00:16:42.175 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.175 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.175 
10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.175 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.175 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.175 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.175 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.175 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.175 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.175 { 00:16:42.175 "cntlid": 129, 00:16:42.175 "qid": 0, 00:16:42.175 "state": "enabled", 00:16:42.175 "thread": "nvmf_tgt_poll_group_000", 00:16:42.175 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:42.175 "listen_address": { 00:16:42.175 "trtype": "TCP", 00:16:42.175 "adrfam": "IPv4", 00:16:42.175 "traddr": "10.0.0.2", 00:16:42.175 "trsvcid": "4420" 00:16:42.175 }, 00:16:42.175 "peer_address": { 00:16:42.175 "trtype": "TCP", 00:16:42.175 "adrfam": "IPv4", 00:16:42.175 "traddr": "10.0.0.1", 00:16:42.175 "trsvcid": "43350" 00:16:42.175 }, 00:16:42.175 "auth": { 00:16:42.175 "state": "completed", 00:16:42.175 "digest": "sha512", 00:16:42.175 "dhgroup": "ffdhe6144" 00:16:42.175 } 00:16:42.175 } 00:16:42.175 ]' 00:16:42.175 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.175 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:42.175 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.434 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:42.434 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.434 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.434 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.434 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.692 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjkyMjY4NmFjOWQ0Y2IzZWY0OGQ1Y2E3NTgzZjZlODg0M2IxZmQ4YzYwZjU0NTdj7PV+JA==: --dhchap-ctrl-secret DHHC-1:03:ZWZmMTM1MWRjZjk5MzM2OWRmZmIxYWI2NTZhMzgwZDA4YjE2ZjAzYzAzYzdjZGQ1ZDEwZTc2OWM0NTVkYjBmZc7fSUk=: 00:16:42.692 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjkyMjY4NmFjOWQ0Y2IzZWY0OGQ1Y2E3NTgzZjZlODg0M2IxZmQ4YzYwZjU0NTdj7PV+JA==: --dhchap-ctrl-secret 
DHHC-1:03:ZWZmMTM1MWRjZjk5MzM2OWRmZmIxYWI2NTZhMzgwZDA4YjE2ZjAzYzAzYzdjZGQ1ZDEwZTc2OWM0NTVkYjBmZc7fSUk=: 00:16:43.260 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.260 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.260 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:43.260 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.260 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.260 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.260 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.260 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:43.260 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:43.260 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:16:43.260 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.260 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:43.260 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:43.260 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:43.260 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.260 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.260 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.260 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.260 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.260 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.260 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.260 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.827 00:16:43.827 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.827 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.827 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.827 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.827 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.827 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.827 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.827 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.827 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.827 { 00:16:43.827 "cntlid": 131, 00:16:43.827 "qid": 0, 00:16:43.827 "state": "enabled", 00:16:43.827 "thread": "nvmf_tgt_poll_group_000", 00:16:43.827 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:43.827 "listen_address": { 00:16:43.827 "trtype": "TCP", 00:16:43.827 "adrfam": "IPv4", 00:16:43.827 "traddr": "10.0.0.2", 00:16:43.827 "trsvcid": "4420" 00:16:43.827 }, 00:16:43.827 "peer_address": { 00:16:43.827 "trtype": "TCP", 00:16:43.827 "adrfam": "IPv4", 00:16:43.827 "traddr": "10.0.0.1", 00:16:43.827 "trsvcid": "43380" 00:16:43.827 }, 00:16:43.827 "auth": { 00:16:43.827 "state": "completed", 00:16:43.827 "digest": "sha512", 00:16:43.827 "dhgroup": "ffdhe6144" 00:16:43.827 } 00:16:43.827 } 00:16:43.827 ]' 00:16:43.827 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.827 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:43.827 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.086 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:44.086 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.086 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.086 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.086 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.344 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzhlZWUwMGEwMjk3NWMyMTdlODZlNjE0NTNiODQ1YzfgcnyV: --dhchap-ctrl-secret DHHC-1:02:NDFlMTY3M2Y1ZjMzZTIyZDgyMjRjYzhlNzVjZDkxMWU3NmQyY2M3NTg0ZjdiZDNmAuQ/6g==: 00:16:44.344 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MzhlZWUwMGEwMjk3NWMyMTdlODZlNjE0NTNiODQ1YzfgcnyV: --dhchap-ctrl-secret DHHC-1:02:NDFlMTY3M2Y1ZjMzZTIyZDgyMjRjYzhlNzVjZDkxMWU3NmQyY2M3NTg0ZjdiZDNmAuQ/6g==: 00:16:44.912 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.912 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:44.912 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.912 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.912 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.912 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.912 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:44.912 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:44.912 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:16:44.912 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.912 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:44.912 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:44.912 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:44.912 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.912 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.912 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.912 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.912 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.912 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.912 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.912 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.479 00:16:45.479 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.479 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.479 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.479 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.479 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.480 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.480 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.480 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.480 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.480 { 00:16:45.480 "cntlid": 133, 00:16:45.480 "qid": 0, 00:16:45.480 "state": "enabled", 00:16:45.480 "thread": "nvmf_tgt_poll_group_000", 00:16:45.480 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:45.480 "listen_address": { 00:16:45.480 "trtype": "TCP", 00:16:45.480 "adrfam": "IPv4", 00:16:45.480 "traddr": "10.0.0.2", 00:16:45.480 "trsvcid": "4420" 00:16:45.480 }, 00:16:45.480 "peer_address": { 00:16:45.480 "trtype": "TCP", 00:16:45.480 "adrfam": "IPv4", 00:16:45.480 "traddr": "10.0.0.1", 00:16:45.480 "trsvcid": "43394" 00:16:45.480 }, 00:16:45.480 "auth": { 00:16:45.480 "state": "completed", 00:16:45.480 "digest": "sha512", 00:16:45.480 "dhgroup": "ffdhe6144" 00:16:45.480 } 00:16:45.480 } 00:16:45.480 ]' 00:16:45.480 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.738 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:45.738 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.738 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:45.738 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.738 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.738 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.738 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.997 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGRkNmY3MzljOTk4OTQ3NzQ0MjhjN2RiM2E2ZGRiNWJkZjkwYTcxOTU1MjlmNGM1BAAzDQ==: --dhchap-ctrl-secret 
DHHC-1:01:MzM0M2M0YzhkZDM0NTExZjY2ODg0YjYzNGE1ZTk5ZDE1X0Ef: 00:16:45.997 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OGRkNmY3MzljOTk4OTQ3NzQ0MjhjN2RiM2E2ZGRiNWJkZjkwYTcxOTU1MjlmNGM1BAAzDQ==: --dhchap-ctrl-secret DHHC-1:01:MzM0M2M0YzhkZDM0NTExZjY2ODg0YjYzNGE1ZTk5ZDE1X0Ef: 00:16:46.565 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.565 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:46.565 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.565 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.565 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.565 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.565 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:46.565 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:46.823 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:16:46.823 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.823 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:46.823 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:46.823 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:46.823 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.823 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:46.823 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.823 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.823 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.823 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:46.823 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:16:46.823 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:47.082 00:16:47.082 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.082 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.082 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.341 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.341 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.341 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.341 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.341 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.341 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.341 { 00:16:47.341 "cntlid": 135, 00:16:47.341 "qid": 0, 00:16:47.341 "state": "enabled", 00:16:47.341 "thread": "nvmf_tgt_poll_group_000", 00:16:47.341 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:47.341 "listen_address": { 00:16:47.341 "trtype": "TCP", 00:16:47.341 "adrfam": "IPv4", 00:16:47.341 "traddr": "10.0.0.2", 00:16:47.341 "trsvcid": "4420" 00:16:47.341 }, 00:16:47.341 "peer_address": { 00:16:47.341 "trtype": "TCP", 00:16:47.341 "adrfam": "IPv4", 00:16:47.341 "traddr": "10.0.0.1", 00:16:47.341 "trsvcid": "46904" 00:16:47.341 }, 00:16:47.341 "auth": { 00:16:47.341 "state": "completed", 00:16:47.341 "digest": "sha512", 00:16:47.341 "dhgroup": "ffdhe6144" 00:16:47.341 } 00:16:47.341 } 00:16:47.341 ]' 00:16:47.341 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.342 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:47.342 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.342 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:47.342 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.342 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.342 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.342 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.600 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NmY2NjMxZDBkMTgzNzk2NzI2YjAzM2E3M2U5NDE3Y2MwM2U2NTA0Y2IxOWVlNTYzNDRlYTFmZjBjODI1ZWFhM7hA1AQ=: 00:16:47.600 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NmY2NjMxZDBkMTgzNzk2NzI2YjAzM2E3M2U5NDE3Y2MwM2U2NTA0Y2IxOWVlNTYzNDRlYTFmZjBjODI1ZWFhM7hA1AQ=: 00:16:48.167 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.167 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.167 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:48.168 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.168 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.168 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.168 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:48.168 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.168 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:48.168 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:48.427 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:16:48.427 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.427 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:48.427 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:48.427 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:48.427 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.427 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.427 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.427 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.427 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.427 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.427 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.427 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.994 00:16:48.994 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.994 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.994 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.994 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.994 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.994 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.994 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.994 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.994 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.994 { 00:16:48.994 "cntlid": 137, 00:16:48.994 "qid": 0, 00:16:48.994 "state": "enabled", 00:16:48.994 "thread": "nvmf_tgt_poll_group_000", 00:16:48.994 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:48.994 "listen_address": { 00:16:48.994 "trtype": "TCP", 00:16:48.994 "adrfam": "IPv4", 00:16:48.994 "traddr": "10.0.0.2", 00:16:48.994 "trsvcid": "4420" 00:16:48.994 }, 00:16:48.994 "peer_address": { 00:16:48.994 "trtype": "TCP", 00:16:48.994 "adrfam": "IPv4", 00:16:48.994 "traddr": "10.0.0.1", 00:16:48.994 "trsvcid": "46930" 00:16:48.994 }, 00:16:48.994 "auth": { 00:16:48.994 "state": "completed", 00:16:48.994 "digest": "sha512", 00:16:48.994 "dhgroup": "ffdhe8192" 00:16:48.994 } 00:16:48.994 } 00:16:48.994 ]' 00:16:48.994 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.253 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:49.253 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.253 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:49.253 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.253 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.253 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.253 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.511 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjkyMjY4NmFjOWQ0Y2IzZWY0OGQ1Y2E3NTgzZjZlODg0M2IxZmQ4YzYwZjU0NTdj7PV+JA==: --dhchap-ctrl-secret DHHC-1:03:ZWZmMTM1MWRjZjk5MzM2OWRmZmIxYWI2NTZhMzgwZDA4YjE2ZjAzYzAzYzdjZGQ1ZDEwZTc2OWM0NTVkYjBmZc7fSUk=: 00:16:49.512 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjkyMjY4NmFjOWQ0Y2IzZWY0OGQ1Y2E3NTgzZjZlODg0M2IxZmQ4YzYwZjU0NTdj7PV+JA==: --dhchap-ctrl-secret DHHC-1:03:ZWZmMTM1MWRjZjk5MzM2OWRmZmIxYWI2NTZhMzgwZDA4YjE2ZjAzYzAzYzdjZGQ1ZDEwZTc2OWM0NTVkYjBmZc7fSUk=: 00:16:50.078 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.078 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:50.078 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.078 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.078 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.078 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.078 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:50.078 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:50.078 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:16:50.078 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.078 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:50.078 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:50.078 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:50.078 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.078 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.078 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.078 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.337 10:30:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.337 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.337 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.337 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.595 00:16:50.595 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.595 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.595 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.854 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.854 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.854 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.854 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.854 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.854 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.854 { 00:16:50.854 "cntlid": 139, 00:16:50.854 "qid": 0, 00:16:50.854 "state": "enabled", 00:16:50.854 "thread": "nvmf_tgt_poll_group_000", 00:16:50.854 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:50.854 "listen_address": { 00:16:50.854 "trtype": "TCP", 00:16:50.854 "adrfam": "IPv4", 00:16:50.854 "traddr": "10.0.0.2", 00:16:50.854 "trsvcid": "4420" 00:16:50.854 }, 00:16:50.854 "peer_address": { 00:16:50.854 "trtype": "TCP", 00:16:50.854 "adrfam": "IPv4", 00:16:50.854 "traddr": "10.0.0.1", 00:16:50.854 "trsvcid": "46954" 00:16:50.854 }, 00:16:50.854 "auth": { 00:16:50.854 "state": "completed", 00:16:50.854 "digest": "sha512", 00:16:50.854 "dhgroup": "ffdhe8192" 00:16:50.854 } 00:16:50.854 } 00:16:50.854 ]' 00:16:50.854 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.854 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:50.854 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.113 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:51.113 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.113 10:30:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.113 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.113 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.113 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzhlZWUwMGEwMjk3NWMyMTdlODZlNjE0NTNiODQ1YzfgcnyV: --dhchap-ctrl-secret DHHC-1:02:NDFlMTY3M2Y1ZjMzZTIyZDgyMjRjYzhlNzVjZDkxMWU3NmQyY2M3NTg0ZjdiZDNmAuQ/6g==: 00:16:51.113 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MzhlZWUwMGEwMjk3NWMyMTdlODZlNjE0NTNiODQ1YzfgcnyV: --dhchap-ctrl-secret DHHC-1:02:NDFlMTY3M2Y1ZjMzZTIyZDgyMjRjYzhlNzVjZDkxMWU3NmQyY2M3NTg0ZjdiZDNmAuQ/6g==: 00:16:51.698 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.698 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.698 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:51.698 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.698 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.698 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.698 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.698 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:51.698 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:52.007 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:16:52.007 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.007 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:52.007 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:52.007 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:52.007 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.007 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.007 10:30:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.007 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.007 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.007 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.007 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.007 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.575 00:16:52.575 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.575 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.575 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.575 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.575 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.575 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.575 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.575 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.575 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.575 { 00:16:52.575 "cntlid": 141, 00:16:52.575 "qid": 0, 00:16:52.575 "state": "enabled", 00:16:52.575 "thread": "nvmf_tgt_poll_group_000", 00:16:52.575 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:52.575 "listen_address": { 00:16:52.575 "trtype": "TCP", 00:16:52.575 "adrfam": "IPv4", 00:16:52.575 "traddr": "10.0.0.2", 00:16:52.575 "trsvcid": "4420" 00:16:52.575 }, 00:16:52.575 "peer_address": { 00:16:52.575 "trtype": "TCP", 00:16:52.575 "adrfam": "IPv4", 00:16:52.575 "traddr": "10.0.0.1", 00:16:52.575 "trsvcid": "46976" 00:16:52.575 }, 00:16:52.575 "auth": { 00:16:52.575 "state": "completed", 00:16:52.575 "digest": "sha512", 00:16:52.575 "dhgroup": "ffdhe8192" 00:16:52.575 } 00:16:52.575 } 00:16:52.575 ]' 00:16:52.575 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.834 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:52.834 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.834 10:30:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:52.834 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.834 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.834 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.834 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.093 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGRkNmY3MzljOTk4OTQ3NzQ0MjhjN2RiM2E2ZGRiNWJkZjkwYTcxOTU1MjlmNGM1BAAzDQ==: --dhchap-ctrl-secret DHHC-1:01:MzM0M2M0YzhkZDM0NTExZjY2ODg0YjYzNGE1ZTk5ZDE1X0Ef: 00:16:53.093 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OGRkNmY3MzljOTk4OTQ3NzQ0MjhjN2RiM2E2ZGRiNWJkZjkwYTcxOTU1MjlmNGM1BAAzDQ==: --dhchap-ctrl-secret DHHC-1:01:MzM0M2M0YzhkZDM0NTExZjY2ODg0YjYzNGE1ZTk5ZDE1X0Ef: 00:16:53.660 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.660 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:53.660 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.660 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.660 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.660 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.660 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:53.660 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:53.919 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:16:53.919 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.919 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:53.919 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:53.919 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:53.919 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.919 10:30:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:53.919 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.919 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.919 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.919 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:53.919 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:53.919 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:54.178 00:16:54.178 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.178 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.178 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.437 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.437 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.437 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.437 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.437 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.437 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.437 { 00:16:54.437 "cntlid": 143, 00:16:54.437 "qid": 0, 00:16:54.437 "state": "enabled", 00:16:54.437 "thread": "nvmf_tgt_poll_group_000", 00:16:54.437 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:54.437 "listen_address": { 00:16:54.437 "trtype": "TCP", 00:16:54.437 "adrfam": "IPv4", 00:16:54.437 "traddr": "10.0.0.2", 00:16:54.437 "trsvcid": "4420" 00:16:54.437 }, 00:16:54.437 "peer_address": { 00:16:54.437 "trtype": "TCP", 00:16:54.437 "adrfam": "IPv4", 00:16:54.437 "traddr": "10.0.0.1", 00:16:54.437 "trsvcid": "46996" 00:16:54.437 }, 00:16:54.437 "auth": { 00:16:54.437 "state": "completed", 00:16:54.437 "digest": "sha512", 00:16:54.437 "dhgroup": "ffdhe8192" 00:16:54.437 } 00:16:54.437 } 00:16:54.437 ]' 00:16:54.437 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.437 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:54.437 
10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.696 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:54.696 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.696 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.696 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.696 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.696 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmY2NjMxZDBkMTgzNzk2NzI2YjAzM2E3M2U5NDE3Y2MwM2U2NTA0Y2IxOWVlNTYzNDRlYTFmZjBjODI1ZWFhM7hA1AQ=: 00:16:54.696 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NmY2NjMxZDBkMTgzNzk2NzI2YjAzM2E3M2U5NDE3Y2MwM2U2NTA0Y2IxOWVlNTYzNDRlYTFmZjBjODI1ZWFhM7hA1AQ=: 00:16:55.263 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.263 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.263 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:55.263 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.263 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.263 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.522 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:55.522 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:16:55.522 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:55.522 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:55.522 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:55.522 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:55.522 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:16:55.522 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.522 10:30:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:55.522 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:55.522 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:55.522 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.522 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.522 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.522 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.522 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.522 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.522 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.522 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.090 00:16:56.090 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.090 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.090 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.348 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.348 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.348 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.348 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.348 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.348 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.348 { 00:16:56.348 "cntlid": 145, 00:16:56.348 "qid": 0, 00:16:56.348 "state": "enabled", 00:16:56.348 "thread": "nvmf_tgt_poll_group_000", 00:16:56.348 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:56.348 "listen_address": { 00:16:56.348 "trtype": "TCP", 00:16:56.348 "adrfam": "IPv4", 00:16:56.348 "traddr": "10.0.0.2", 00:16:56.348 "trsvcid": "4420" 00:16:56.348 }, 00:16:56.348 "peer_address": { 00:16:56.348 
"trtype": "TCP", 00:16:56.348 "adrfam": "IPv4", 00:16:56.348 "traddr": "10.0.0.1", 00:16:56.348 "trsvcid": "47008" 00:16:56.348 }, 00:16:56.348 "auth": { 00:16:56.348 "state": "completed", 00:16:56.348 "digest": "sha512", 00:16:56.348 "dhgroup": "ffdhe8192" 00:16:56.348 } 00:16:56.348 } 00:16:56.348 ]' 00:16:56.348 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.348 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:56.348 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.348 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:56.348 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.348 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.348 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.348 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.607 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjkyMjY4NmFjOWQ0Y2IzZWY0OGQ1Y2E3NTgzZjZlODg0M2IxZmQ4YzYwZjU0NTdj7PV+JA==: --dhchap-ctrl-secret DHHC-1:03:ZWZmMTM1MWRjZjk5MzM2OWRmZmIxYWI2NTZhMzgwZDA4YjE2ZjAzYzAzYzdjZGQ1ZDEwZTc2OWM0NTVkYjBmZc7fSUk=: 00:16:56.607 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjkyMjY4NmFjOWQ0Y2IzZWY0OGQ1Y2E3NTgzZjZlODg0M2IxZmQ4YzYwZjU0NTdj7PV+JA==: --dhchap-ctrl-secret DHHC-1:03:ZWZmMTM1MWRjZjk5MzM2OWRmZmIxYWI2NTZhMzgwZDA4YjE2ZjAzYzAzYzdjZGQ1ZDEwZTc2OWM0NTVkYjBmZc7fSUk=: 00:16:57.175 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.175 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:57.175 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.175 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.175 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.175 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:16:57.175 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.175 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.175 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.175 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:16:57.175 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:57.175 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:16:57.175 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:57.175 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:57.175 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:57.175 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:57.175 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:16:57.175 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:16:57.175 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:16:57.743 request: 00:16:57.743 { 00:16:57.743 "name": "nvme0", 00:16:57.743 "trtype": "tcp", 00:16:57.743 "traddr": "10.0.0.2", 00:16:57.743 "adrfam": "ipv4", 00:16:57.743 "trsvcid": "4420", 00:16:57.743 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:57.743 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:57.743 "prchk_reftag": false, 00:16:57.743 "prchk_guard": false, 00:16:57.743 "hdgst": false, 00:16:57.743 "ddgst": false, 00:16:57.743 "dhchap_key": "key2", 00:16:57.743 "allow_unrecognized_csi": false, 00:16:57.743 "method": "bdev_nvme_attach_controller", 00:16:57.743 "req_id": 1 00:16:57.743 } 00:16:57.743 Got JSON-RPC error response 00:16:57.743 response: 00:16:57.743 { 00:16:57.743 "code": -5, 00:16:57.743 "message": "Input/output error" 00:16:57.743 } 00:16:57.743 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:57.743 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:57.743 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:57.743 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:57.743 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:57.743 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.743 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.743 10:30:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.743 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.743 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.743 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.743 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.743 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:57.743 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:57.743 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:57.743 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:57.743 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:57.743 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:57.743 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:57.743 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:57.743 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:57.743 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:58.312 request: 00:16:58.312 { 00:16:58.312 "name": "nvme0", 00:16:58.312 "trtype": "tcp", 00:16:58.312 "traddr": "10.0.0.2", 00:16:58.312 "adrfam": "ipv4", 00:16:58.312 "trsvcid": "4420", 00:16:58.312 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:58.312 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:58.312 "prchk_reftag": false, 00:16:58.312 "prchk_guard": false, 00:16:58.312 "hdgst": false, 00:16:58.312 "ddgst": false, 00:16:58.312 "dhchap_key": "key1", 00:16:58.312 "dhchap_ctrlr_key": "ckey2", 00:16:58.312 "allow_unrecognized_csi": false, 00:16:58.312 "method": "bdev_nvme_attach_controller", 00:16:58.312 "req_id": 1 00:16:58.312 } 00:16:58.312 Got JSON-RPC error response 00:16:58.312 response: 00:16:58.312 { 00:16:58.312 "code": -5, 00:16:58.312 "message": "Input/output error" 00:16:58.312 } 00:16:58.312 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:58.312 10:30:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:58.312 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:58.312 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:58.312 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:58.312 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.312 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.312 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.312 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:16:58.312 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.312 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.312 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.312 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.312 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:58.312 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.312 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:58.312 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:58.312 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:58.312 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:58.312 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.312 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.312 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.571 request: 00:16:58.571 { 00:16:58.571 "name": "nvme0", 00:16:58.571 "trtype": "tcp", 00:16:58.571 "traddr": "10.0.0.2", 00:16:58.571 "adrfam": "ipv4", 00:16:58.571 "trsvcid": "4420", 00:16:58.571 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:58.571 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:58.571 "prchk_reftag": false, 00:16:58.571 "prchk_guard": false, 00:16:58.571 "hdgst": false, 00:16:58.571 "ddgst": false, 00:16:58.571 "dhchap_key": "key1", 00:16:58.571 "dhchap_ctrlr_key": "ckey1", 00:16:58.571 "allow_unrecognized_csi": false, 00:16:58.571 "method": "bdev_nvme_attach_controller", 00:16:58.571 "req_id": 1 00:16:58.571 } 00:16:58.571 Got JSON-RPC error response 00:16:58.571 response: 00:16:58.571 { 00:16:58.571 "code": -5, 00:16:58.571 "message": "Input/output error" 00:16:58.571 } 00:16:58.571 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:58.571 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:58.571 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:58.571 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:58.571 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:58.571 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.571 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.571 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.571 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1496115 00:16:58.571 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1496115 ']' 00:16:58.571 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1496115 00:16:58.571 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:16:58.571 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:58.571 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1496115 00:16:58.571 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:58.571 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:58.571 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1496115' 00:16:58.571 killing process with pid 1496115 00:16:58.571 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1496115 00:16:58.571 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1496115 00:16:58.830 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:16:58.830 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:58.830 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:58.830 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:58.830 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1517985 00:16:58.830 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:16:58.830 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1517985 00:16:58.830 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1517985 ']' 00:16:58.830 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.830 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:58.830 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.830 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:58.830 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.089 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:59.089 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:59.089 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:59.089 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:59.089 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.089 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:59.089 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:59.089 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1517985 00:16:59.089 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1517985 ']' 00:16:59.089 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.089 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:59.089 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:59.089 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:59.089 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.348 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:59.348 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:59.348 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:16:59.348 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.348 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.348 null0 00:16:59.348 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.348 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:59.348 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.jYA 00:16:59.348 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.348 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.348 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.348 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.8t1 ]] 00:16:59.348 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8t1 00:16:59.348 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.348 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.348 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.348 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:59.348 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Z75 00:16:59.348 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.348 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.348 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.349 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.hNz ]] 00:16:59.349 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.hNz 00:16:59.349 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.349 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.349 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.349 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:59.349 10:30:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.G8U 00:16:59.349 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.349 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.349 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.349 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.moV ]] 00:16:59.349 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.moV 00:16:59.349 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.349 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.349 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.349 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:59.349 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.jWN 00:16:59.349 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.349 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.608 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.608 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:16:59.608 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:16:59.608 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.608 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:59.608 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:59.608 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:59.608 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.608 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:59.608 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.608 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.608 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.608 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:59.608 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:16:59.608 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:00.175 nvme0n1 00:17:00.175 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.175 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.175 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.434 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.434 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.434 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.434 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.434 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.434 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.434 { 00:17:00.434 "cntlid": 1, 00:17:00.434 "qid": 0, 00:17:00.434 "state": "enabled", 00:17:00.434 "thread": "nvmf_tgt_poll_group_000", 00:17:00.434 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:00.434 "listen_address": { 00:17:00.434 "trtype": "TCP", 00:17:00.434 "adrfam": "IPv4", 00:17:00.434 "traddr": "10.0.0.2", 00:17:00.434 "trsvcid": "4420" 00:17:00.434 }, 00:17:00.434 "peer_address": { 00:17:00.434 "trtype": "TCP", 00:17:00.434 "adrfam": "IPv4", 00:17:00.434 "traddr": "10.0.0.1", 00:17:00.434 "trsvcid": "35354" 00:17:00.434 }, 00:17:00.434 "auth": { 00:17:00.434 "state": "completed", 00:17:00.434 "digest": "sha512", 00:17:00.434 "dhgroup": "ffdhe8192" 00:17:00.434 } 00:17:00.434 } 00:17:00.434 ]' 00:17:00.434 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.434 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:00.434 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.434 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:00.434 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.692 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.692 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.692 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.692 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NmY2NjMxZDBkMTgzNzk2NzI2YjAzM2E3M2U5NDE3Y2MwM2U2NTA0Y2IxOWVlNTYzNDRlYTFmZjBjODI1ZWFhM7hA1AQ=: 00:17:00.692 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NmY2NjMxZDBkMTgzNzk2NzI2YjAzM2E3M2U5NDE3Y2MwM2U2NTA0Y2IxOWVlNTYzNDRlYTFmZjBjODI1ZWFhM7hA1AQ=: 00:17:01.259 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.259 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:01.259 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.259 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.259 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.259 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:17:01.259 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.259 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.518 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.518 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:01.518 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:01.518 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:01.518 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:01.518 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:01.518 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:01.518 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:01.518 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:01.518 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:01.518 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:01.518 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.518 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.777 request: 00:17:01.777 { 00:17:01.777 "name": "nvme0", 00:17:01.777 "trtype": "tcp", 00:17:01.777 "traddr": "10.0.0.2", 00:17:01.777 "adrfam": "ipv4", 00:17:01.777 "trsvcid": "4420", 00:17:01.777 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:01.777 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:01.777 "prchk_reftag": false, 00:17:01.777 "prchk_guard": false, 00:17:01.777 "hdgst": false, 00:17:01.777 "ddgst": false, 00:17:01.777 "dhchap_key": "key3", 00:17:01.777 "allow_unrecognized_csi": false, 00:17:01.777 "method": "bdev_nvme_attach_controller", 00:17:01.777 "req_id": 1 00:17:01.777 } 00:17:01.777 Got JSON-RPC error response 00:17:01.777 response: 00:17:01.777 { 00:17:01.777 "code": -5, 00:17:01.777 "message": "Input/output error" 00:17:01.777 } 00:17:01.777 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:01.777 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:01.777 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:01.777 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:01.777 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:01.777 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:01.777 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:01.777 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:02.036 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:02.036 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:02.036 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:02.036 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:02.036 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:02.036 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:02.036 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:02.036 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:02.036 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:02.036 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:02.295 request: 00:17:02.295 { 00:17:02.295 "name": "nvme0", 00:17:02.295 "trtype": "tcp", 00:17:02.295 "traddr": "10.0.0.2", 00:17:02.295 "adrfam": "ipv4", 00:17:02.295 "trsvcid": "4420", 00:17:02.295 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:02.295 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:02.295 "prchk_reftag": false, 00:17:02.295 "prchk_guard": false, 00:17:02.295 "hdgst": false, 00:17:02.295 "ddgst": false, 00:17:02.295 "dhchap_key": "key3", 00:17:02.295 "allow_unrecognized_csi": false, 00:17:02.295 "method": "bdev_nvme_attach_controller", 00:17:02.295 "req_id": 1 00:17:02.295 } 00:17:02.295 Got JSON-RPC error response 00:17:02.295 response: 00:17:02.295 { 00:17:02.295 "code": -5, 00:17:02.295 "message": "Input/output error" 00:17:02.295 } 00:17:02.295 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:02.295 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:02.295 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:02.295 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:02.295 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:02.295 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:02.295 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:02.295 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:02.295 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:02.295 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:02.295 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:02.295 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.295 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.295 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.295 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:02.295 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.295 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.295 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.295 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:02.295 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:02.295 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:02.295 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:02.295 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:02.295 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:02.295 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:02.295 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:02.295 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:02.295 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:02.863 request: 00:17:02.863 { 00:17:02.863 "name": "nvme0", 00:17:02.863 "trtype": "tcp", 00:17:02.863 "traddr": "10.0.0.2", 00:17:02.863 "adrfam": "ipv4", 00:17:02.863 "trsvcid": "4420", 00:17:02.863 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:02.863 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:02.863 "prchk_reftag": false, 00:17:02.863 "prchk_guard": false, 00:17:02.863 "hdgst": false, 00:17:02.863 "ddgst": false, 00:17:02.863 "dhchap_key": "key0", 00:17:02.863 "dhchap_ctrlr_key": "key1", 00:17:02.863 "allow_unrecognized_csi": false, 00:17:02.863 "method": "bdev_nvme_attach_controller", 00:17:02.863 "req_id": 1 00:17:02.863 } 00:17:02.863 Got JSON-RPC error response 00:17:02.863 response: 00:17:02.863 { 00:17:02.863 "code": -5, 00:17:02.863 "message": "Input/output error" 00:17:02.863 } 00:17:02.863 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:02.863 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:02.863 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:02.863 10:30:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:02.863 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:02.863 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:02.863 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:03.121 nvme0n1 00:17:03.121 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:03.121 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:03.121 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.121 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.121 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.121 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.381 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:17:03.381 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.381 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.381 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.381 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:03.381 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:03.381 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:04.319 nvme0n1 00:17:04.319 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:04.319 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:17:04.319 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:04.319 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.319 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:04.319 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.319 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.319 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.319 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:04.319 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:04.319 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.578 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.578 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:OGRkNmY3MzljOTk4OTQ3NzQ0MjhjN2RiM2E2ZGRiNWJkZjkwYTcxOTU1MjlmNGM1BAAzDQ==: --dhchap-ctrl-secret DHHC-1:03:NmY2NjMxZDBkMTgzNzk2NzI2YjAzM2E3M2U5NDE3Y2MwM2U2NTA0Y2IxOWVlNTYzNDRlYTFmZjBjODI1ZWFhM7hA1AQ=: 00:17:04.578 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OGRkNmY3MzljOTk4OTQ3NzQ0MjhjN2RiM2E2ZGRiNWJkZjkwYTcxOTU1MjlmNGM1BAAzDQ==: --dhchap-ctrl-secret DHHC-1:03:NmY2NjMxZDBkMTgzNzk2NzI2YjAzM2E3M2U5NDE3Y2MwM2U2NTA0Y2IxOWVlNTYzNDRlYTFmZjBjODI1ZWFhM7hA1AQ=: 00:17:05.145 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:05.145 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:05.145 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:05.145 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:05.145 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:05.145 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:05.145 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:05.145 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.146 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.404 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT 
bdev_connect -b nvme0 --dhchap-key key1 00:17:05.404 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:05.404 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:05.405 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:05.405 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:05.405 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:05.405 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:05.405 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:05.405 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:05.405 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:05.972 request: 00:17:05.972 { 00:17:05.972 "name": "nvme0", 00:17:05.972 "trtype": "tcp", 00:17:05.972 "traddr": "10.0.0.2", 00:17:05.972 "adrfam": "ipv4", 00:17:05.972 "trsvcid": "4420", 00:17:05.972 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:05.972 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:05.972 "prchk_reftag": false, 00:17:05.972 "prchk_guard": false, 00:17:05.972 "hdgst": false, 00:17:05.972 "ddgst": false, 00:17:05.972 "dhchap_key": "key1", 00:17:05.972 "allow_unrecognized_csi": false, 00:17:05.972 "method": "bdev_nvme_attach_controller", 00:17:05.972 "req_id": 1 00:17:05.972 } 00:17:05.972 Got JSON-RPC error response 00:17:05.972 response: 00:17:05.972 { 00:17:05.972 "code": -5, 00:17:05.972 "message": "Input/output error" 00:17:05.972 } 00:17:05.972 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:05.972 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:05.972 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:05.972 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:05.972 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:05.972 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:05.972 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:06.539 nvme0n1 00:17:06.539 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:06.539 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.539 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:06.798 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.798 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.798 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.057 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:07.057 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.057 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.057 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.057 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:07.057 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:07.057 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:07.316 nvme0n1 00:17:07.316 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:07.316 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:07.316 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.575 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.575 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.575 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.575 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:07.575 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.575 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.575 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.575 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MzhlZWUwMGEwMjk3NWMyMTdlODZlNjE0NTNiODQ1YzfgcnyV: '' 2s 00:17:07.575 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:07.575 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:07.575 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MzhlZWUwMGEwMjk3NWMyMTdlODZlNjE0NTNiODQ1YzfgcnyV: 00:17:07.575 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:07.575 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:07.575 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:07.575 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MzhlZWUwMGEwMjk3NWMyMTdlODZlNjE0NTNiODQ1YzfgcnyV: ]] 00:17:07.575 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MzhlZWUwMGEwMjk3NWMyMTdlODZlNjE0NTNiODQ1YzfgcnyV: 00:17:07.575 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:07.575 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:07.575 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:10.109 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:10.109 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:10.109 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:10.109 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:10.109 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:10.109 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:10.109 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:10.109 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:10.109 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.109 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.109 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.109 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # 
nvme_set_keys nvme0 '' DHHC-1:02:OGRkNmY3MzljOTk4OTQ3NzQ0MjhjN2RiM2E2ZGRiNWJkZjkwYTcxOTU1MjlmNGM1BAAzDQ==: 2s 00:17:10.109 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:10.109 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:10.109 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:10.109 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:OGRkNmY3MzljOTk4OTQ3NzQ0MjhjN2RiM2E2ZGRiNWJkZjkwYTcxOTU1MjlmNGM1BAAzDQ==: 00:17:10.109 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:10.109 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:10.109 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:10.109 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:OGRkNmY3MzljOTk4OTQ3NzQ0MjhjN2RiM2E2ZGRiNWJkZjkwYTcxOTU1MjlmNGM1BAAzDQ==: ]] 00:17:10.109 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:OGRkNmY3MzljOTk4OTQ3NzQ0MjhjN2RiM2E2ZGRiNWJkZjkwYTcxOTU1MjlmNGM1BAAzDQ==: 00:17:10.109 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:10.109 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:12.013 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:12.013 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:12.013 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:12.013 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:12.013 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:12.013 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:12.013 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:12.013 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.013 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:12.013 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.013 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.013 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.013 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:12.013 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:12.013 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:12.580 nvme0n1 00:17:12.580 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:12.580 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.580 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.580 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.580 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:12.580 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:13.148 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:13.148 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:13.148 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.148 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.148 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:13.148 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.148 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.148 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.148 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:13.148 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:17:13.407 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:13.407 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:17:13.407 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.666 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.666 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:13.666 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.666 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.666 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.666 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:13.666 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:13.666 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:13.666 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:13.666 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:13.666 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:13.666 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:13.666 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:13.666 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:14.233 request: 00:17:14.233 { 00:17:14.233 "name": "nvme0", 00:17:14.233 "dhchap_key": "key1", 00:17:14.233 "dhchap_ctrlr_key": "key3", 00:17:14.233 "method": "bdev_nvme_set_keys", 00:17:14.233 "req_id": 1 00:17:14.233 } 00:17:14.233 Got JSON-RPC error response 00:17:14.233 response: 00:17:14.233 { 00:17:14.233 "code": -13, 00:17:14.234 "message": "Permission denied" 00:17:14.234 } 00:17:14.234 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:14.234 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:14.234 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:14.234 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:14.234 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:14.234 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:14.234 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.234 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:17:14.234 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:15.169 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:15.169 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:15.169 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.428 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:15.428 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:15.428 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.428 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.428 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.428 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:15.428 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:15.428 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:16.364 nvme0n1 00:17:16.365 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:16.365 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.365 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.365 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.365 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:16.365 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:16.365 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:16.365 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
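The auth.sh@267-271 sequence interleaved above is the live re-key flow: the target first narrows the subsystem to a new key pair via nvmf_subsystem_set_keys, then the host rotates the attached controller with bdev_nvme_set_keys; the request/response that follows shows the failure mode, where asking for a pair the subsystem no longer accepts (key2/key0 here) is rejected with JSON-RPC -13 (Permission denied) rather than -5. A condensed two-step sketch, commands taken from the trace; the target-side call is assumed to go to rpc.py's default /var/tmp/spdk.sock socket, which the trace does not print:

  # 1) Target: from now on accept only key2 (host->controller) / key3
  #    (controller->host) for this host NQN.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
      --dhchap-key key2 --dhchap-ctrlr-key key3
  # 2) Host: rotate the live controller to the matching pair. Any other
  #    combination now fails with -13 (Permission denied).
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3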
00:17:16.365 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:16.365 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:16.365 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:16.365 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:16.365 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:16.623 request: 00:17:16.623 { 00:17:16.623 "name": "nvme0", 00:17:16.623 "dhchap_key": "key2", 00:17:16.623 "dhchap_ctrlr_key": "key0", 00:17:16.623 "method": "bdev_nvme_set_keys", 00:17:16.623 "req_id": 1 00:17:16.623 } 00:17:16.623 Got JSON-RPC error response 00:17:16.623 response: 00:17:16.623 { 00:17:16.623 "code": -13, 00:17:16.623 "message": "Permission denied" 00:17:16.623 } 00:17:16.882 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:16.882 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:16.882 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:16.882 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:16.882 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:16.882 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:16.882 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.882 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:16.882 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:18.259 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:18.259 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:18.259 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.259 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:17:18.259 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:17:18.259 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:17:18.259 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1496316 00:17:18.259 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1496316 ']' 00:17:18.259 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1496316 00:17:18.259 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:18.259 
10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:18.259 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1496316 00:17:18.259 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:18.259 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:18.259 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1496316' 00:17:18.259 killing process with pid 1496316 00:17:18.259 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1496316 00:17:18.259 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1496316 00:17:18.518 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:18.518 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:18.519 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:17:18.519 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:18.519 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:17:18.519 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:18.519 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:18.519 rmmod nvme_tcp 00:17:18.519 rmmod nvme_fabrics 00:17:18.519 rmmod nvme_keyring 00:17:18.519 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:18.519 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:17:18.519 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:17:18.519 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1517985 ']' 00:17:18.519 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1517985 00:17:18.519 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1517985 ']' 00:17:18.519 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1517985 00:17:18.519 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:18.519 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:18.519 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1517985 00:17:18.519 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:18.519 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:18.519 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1517985' 00:17:18.519 killing process with pid 1517985 00:17:18.519 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1517985 00:17:18.519 10:30:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1517985 00:17:18.778 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:18.778 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:18.778 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:18.778 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:17:18.778 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:17:18.778 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:18.778 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:18.778 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:18.778 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:18.778 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:18.778 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:18.778 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:21.313 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:21.313 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.jYA /tmp/spdk.key-sha256.Z75 /tmp/spdk.key-sha384.G8U /tmp/spdk.key-sha512.jWN /tmp/spdk.key-sha512.8t1 /tmp/spdk.key-sha384.hNz /tmp/spdk.key-sha256.moV '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:21.313 00:17:21.313 real 2m31.982s 00:17:21.313 user 5m50.467s 00:17:21.313 sys 0m24.176s 00:17:21.313 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:21.313 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.313 ************************************ 00:17:21.313 END TEST nvmf_auth_target 00:17:21.313 ************************************ 00:17:21.313 10:30:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:21.313 10:30:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:21.313 10:30:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:21.313 10:30:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:21.313 10:30:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:21.313 ************************************ 00:17:21.313 START TEST nvmf_bdevio_no_huge 00:17:21.313 ************************************ 00:17:21.313 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:21.313 * Looking for test storage... 
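The teardown just logged is the stock cleanup/nvmftestfini pattern: kill the host app and the nvmf target (pids 1496316 and 1517985 above), unload the kernel NVMe-oF modules, strip the SPDK_NVMF iptables rules, drop the test network namespace, and delete the generated DHHC-1 key files before the next test (nvmf_bdevio_no_huge) starts. Reduced to its shell essentials; the glob on the last line stands in for the per-digest /tmp/spdk.key-* files named in the trace:

  pid=1517985                       # nvmf target pid from this run
  kill $pid && wait $pid            # killprocess: terminate, then reap
  modprobe -v -r nvme-tcp           # also drops now-unused deps, per the
                                    # rmmod nvme_fabrics/nvme_keyring lines above
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # the iptr helper
  rm -f /tmp/spdk.key-*             # generated null/sha256/sha384/sha512 keys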
00:17:21.313 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:21.313 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:21.313 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:17:21.313 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:21.313 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:21.313 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:21.313 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:21.313 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:21.313 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:17:21.313 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:17:21.313 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:17:21.313 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:17:21.313 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:17:21.313 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:17:21.313 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:17:21.313 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:21.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.314 --rc genhtml_branch_coverage=1 00:17:21.314 --rc genhtml_function_coverage=1 00:17:21.314 --rc genhtml_legend=1 00:17:21.314 --rc geninfo_all_blocks=1 00:17:21.314 --rc geninfo_unexecuted_blocks=1 00:17:21.314 00:17:21.314 ' 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:21.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.314 --rc genhtml_branch_coverage=1 00:17:21.314 --rc genhtml_function_coverage=1 00:17:21.314 --rc genhtml_legend=1 00:17:21.314 --rc geninfo_all_blocks=1 00:17:21.314 --rc geninfo_unexecuted_blocks=1 00:17:21.314 00:17:21.314 ' 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:21.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.314 --rc genhtml_branch_coverage=1 00:17:21.314 --rc genhtml_function_coverage=1 00:17:21.314 --rc genhtml_legend=1 00:17:21.314 --rc geninfo_all_blocks=1 00:17:21.314 --rc geninfo_unexecuted_blocks=1 00:17:21.314 00:17:21.314 ' 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:21.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.314 --rc genhtml_branch_coverage=1 00:17:21.314 --rc genhtml_function_coverage=1 00:17:21.314 --rc genhtml_legend=1 00:17:21.314 --rc geninfo_all_blocks=1 00:17:21.314 --rc geninfo_unexecuted_blocks=1 00:17:21.314 00:17:21.314 ' 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:21.314 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:17:21.314 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:27.885 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:27.885 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:17:27.885 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:27.885 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:27.885 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:27.885 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:27.885 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:27.885 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:17:27.885 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:27.885 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:17:27.885 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:17:27.885 
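One warning in the stream above is worth decoding: nvmf/common.sh line 33 runs '[' '' -eq 1 ']' and bash answers "[: : integer expression expected", because test(1) requires integers on both sides of -eq and the flag variable under test expanded to the empty string (the trace shows only the expanded value, not the variable's name). The run is unaffected; the check simply evaluates false and the script continues. A minimal reproduction with a hypothetical stand-in name, plus the defensive form:

  unset SOME_TEST_FLAG                               # stand-in; real name not in the trace
  [ "$SOME_TEST_FLAG" -eq 1 ] && echo enabled        # -> [: : integer expression expected
  [ "${SOME_TEST_FLAG:-0}" -eq 1 ] && echo enabled   # empty defaults to 0; no warning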
10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:17:27.885 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:17:27.885 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:17:27.885 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:17:27.885 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:27.885 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:27.885 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:27.885 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:27.885 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:27.885 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:27.885 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:27.885 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:27.885 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:27.885 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:27.885 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:27.885 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:27.885 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:27.885 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:27.885 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:27.885 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:27.886 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:27.886 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:27.886 Found net devices under 0000:af:00.0: cvl_0_0 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:27.886 Found net devices under 0000:af:00.1: cvl_0_1 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:27.886 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:27.886 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:17:27.886 00:17:27.886 --- 10.0.0.2 ping statistics --- 00:17:27.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.886 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:27.886 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:27.886 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:17:27.886 00:17:27.886 --- 10.0.0.1 ping statistics --- 00:17:27.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.886 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=1524724 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 1524724 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 1524724 ']' 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:27.886 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:27.886 [2024-12-12 10:31:01.019354] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:17:27.886 [2024-12-12 10:31:01.019398] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:27.887 [2024-12-12 10:31:01.098703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:27.887 [2024-12-12 10:31:01.144964] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:27.887 [2024-12-12 10:31:01.144999] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:27.887 [2024-12-12 10:31:01.145005] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:27.887 [2024-12-12 10:31:01.145011] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:27.887 [2024-12-12 10:31:01.145016] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
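The -m 0x78 mask handed to nvmf_tgt above is a CPU core bitmap: 0x78 = 0b01111000, i.e. cores 3, 4, 5 and 6, which matches the set of reactors reported started in the notices that follow. A quick way to decode such a mask by hand:

    mask=0x78
    for ((core = 0; core < 8; core++)); do
      (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
    done
    # prints cores 3 4 5 6 for 0x78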
00:17:27.887 [2024-12-12 10:31:01.146005] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:17:27.887 [2024-12-12 10:31:01.146111] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:17:27.887 [2024-12-12 10:31:01.146238] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:17:27.887 [2024-12-12 10:31:01.146239] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:17:27.887 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:27.887 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:17:27.887 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:27.887 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:27.887 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:27.887 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:27.887 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:27.887 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.887 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:27.887 [2024-12-12 10:31:01.906742] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:28.146 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.146 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:28.146 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.146 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:28.146 Malloc0 00:17:28.146 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.146 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:28.146 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.146 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:28.146 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.146 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:28.146 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.146 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:28.146 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.146 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:17:28.146 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.146 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:28.146 [2024-12-12 10:31:01.951059] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:28.146 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.146 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:28.146 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:28.146 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:17:28.146 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:17:28.146 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:28.146 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:28.146 { 00:17:28.146 "params": { 00:17:28.146 "name": "Nvme$subsystem", 00:17:28.146 "trtype": "$TEST_TRANSPORT", 00:17:28.146 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:28.146 "adrfam": "ipv4", 00:17:28.146 "trsvcid": "$NVMF_PORT", 00:17:28.146 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:28.146 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:28.146 "hdgst": ${hdgst:-false}, 00:17:28.146 "ddgst": ${ddgst:-false} 00:17:28.146 }, 00:17:28.146 "method": "bdev_nvme_attach_controller" 00:17:28.146 } 00:17:28.146 EOF 00:17:28.146 )") 00:17:28.146 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:17:28.146 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:17:28.146 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:17:28.146 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:28.146 "params": { 00:17:28.146 "name": "Nvme1", 00:17:28.146 "trtype": "tcp", 00:17:28.146 "traddr": "10.0.0.2", 00:17:28.146 "adrfam": "ipv4", 00:17:28.146 "trsvcid": "4420", 00:17:28.146 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:28.146 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:28.146 "hdgst": false, 00:17:28.146 "ddgst": false 00:17:28.146 }, 00:17:28.146 "method": "bdev_nvme_attach_controller" 00:17:28.146 }' 00:17:28.146 [2024-12-12 10:31:02.001343] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
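Stripped of the rpc_cmd/xtrace plumbing, the target-side RPC sequence traced above amounts to the following scripts/rpc.py calls (transport options, sizes, NQN and address all copied from the trace):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bdevio initiator is then pointed at that listener via '--json /dev/fd/62', i.e. the attach-controller JSON generated by gen_nvmf_target_json is fed through process substitution (bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024) rather than written to a temp file.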
00:17:28.146 [2024-12-12 10:31:02.001387] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1524876 ] 00:17:28.146 [2024-12-12 10:31:02.079276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:28.146 [2024-12-12 10:31:02.127195] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:28.146 [2024-12-12 10:31:02.127303] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.146 [2024-12-12 10:31:02.127303] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:28.712 I/O targets: 00:17:28.712 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:28.712 00:17:28.712 00:17:28.712 CUnit - A unit testing framework for C - Version 2.1-3 00:17:28.712 http://cunit.sourceforge.net/ 00:17:28.712 00:17:28.713 00:17:28.713 Suite: bdevio tests on: Nvme1n1 00:17:28.713 Test: blockdev write read block ...passed 00:17:28.713 Test: blockdev write zeroes read block ...passed 00:17:28.713 Test: blockdev write zeroes read no split ...passed 00:17:28.713 Test: blockdev write zeroes read split ...passed 00:17:28.713 Test: blockdev write zeroes read split partial ...passed 00:17:28.713 Test: blockdev reset ...[2024-12-12 10:31:02.613809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:28.713 [2024-12-12 10:31:02.613870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116cd30 (9): Bad file descriptor 00:17:28.713 [2024-12-12 10:31:02.629293] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:17:28.713 passed 00:17:28.713 Test: blockdev write read 8 blocks ...passed 00:17:28.713 Test: blockdev write read size > 128k ...passed 00:17:28.713 Test: blockdev write read invalid size ...passed 00:17:28.713 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:28.713 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:28.713 Test: blockdev write read max offset ...passed 00:17:28.971 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:28.971 Test: blockdev writev readv 8 blocks ...passed 00:17:28.971 Test: blockdev writev readv 30 x 1block ...passed 00:17:28.971 Test: blockdev writev readv block ...passed 00:17:28.971 Test: blockdev writev readv size > 128k ...passed 00:17:28.971 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:28.971 Test: blockdev comparev and writev ...[2024-12-12 10:31:02.881363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:28.971 [2024-12-12 10:31:02.881396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.971 [2024-12-12 10:31:02.881410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:28.971 [2024-12-12 10:31:02.881418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:28.971 [2024-12-12 10:31:02.881657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:28.971 [2024-12-12 10:31:02.881668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:28.971 [2024-12-12 10:31:02.881679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:28.971 [2024-12-12 10:31:02.881686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:28.971 [2024-12-12 10:31:02.881933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:28.971 [2024-12-12 10:31:02.881946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:28.971 [2024-12-12 10:31:02.881958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:28.971 [2024-12-12 10:31:02.881965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:28.971 [2024-12-12 10:31:02.882185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:28.971 [2024-12-12 10:31:02.882195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:28.971 [2024-12-12 10:31:02.882206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:28.971 [2024-12-12 10:31:02.882212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:28.971 passed 00:17:28.971 Test: blockdev nvme passthru rw ...passed 00:17:28.971 Test: blockdev nvme passthru vendor specific ...[2024-12-12 10:31:02.963873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:28.971 [2024-12-12 10:31:02.963890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:28.971 [2024-12-12 10:31:02.963995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:28.971 [2024-12-12 10:31:02.964005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:28.971 [2024-12-12 10:31:02.964106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:28.971 [2024-12-12 10:31:02.964115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:28.971 [2024-12-12 10:31:02.964212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:28.971 [2024-12-12 10:31:02.964221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:28.971 passed 00:17:28.971 Test: blockdev nvme admin passthru ...passed 00:17:29.230 Test: blockdev copy ...passed 00:17:29.230 00:17:29.230 Run Summary: Type Total Ran Passed Failed Inactive 00:17:29.230 suites 1 1 n/a 0 0 00:17:29.230 tests 23 23 23 0 0 00:17:29.230 asserts 152 152 152 0 n/a 00:17:29.230 00:17:29.230 Elapsed time = 1.141 seconds 00:17:29.489 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:29.489 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.489 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:29.489 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.489 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:29.489 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:29.489 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:29.489 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:17:29.489 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:29.489 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:17:29.489 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:29.489 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:29.489 rmmod nvme_tcp 00:17:29.489 rmmod nvme_fabrics 00:17:29.489 rmmod nvme_keyring 00:17:29.489 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:29.489 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:17:29.489 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:17:29.489 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 1524724 ']' 00:17:29.489 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 1524724 00:17:29.489 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 1524724 ']' 00:17:29.489 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 1524724 00:17:29.489 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:17:29.489 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:29.489 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1524724 00:17:29.489 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:17:29.489 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:17:29.489 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1524724' 00:17:29.489 killing process with pid 1524724 00:17:29.489 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 1524724 00:17:29.489 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 1524724 00:17:29.748 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:29.748 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:29.748 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:29.748 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:17:29.748 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:17:29.748 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:29.748 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:17:29.748 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:29.748 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:29.748 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:29.748 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:29.748 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:32.281 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:32.281 00:17:32.281 real 0m10.929s 00:17:32.281 user 0m14.271s 00:17:32.281 sys 0m5.292s 00:17:32.281 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:32.281 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:17:32.281 ************************************ 00:17:32.281 END TEST nvmf_bdevio_no_huge 00:17:32.281 ************************************ 00:17:32.281 10:31:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:32.281 10:31:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:32.281 10:31:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:32.281 10:31:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:32.281 ************************************ 00:17:32.281 START TEST nvmf_tls 00:17:32.281 ************************************ 00:17:32.281 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:32.281 * Looking for test storage... 00:17:32.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:32.281 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:32.281 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:17:32.281 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:32.281 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:32.281 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:32.281 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:32.281 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:32.281 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:17:32.281 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:17:32.281 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:17:32.281 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:17:32.281 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:17:32.281 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:17:32.281 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:17:32.281 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:32.281 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:17:32.281 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:17:32.281 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:32.281 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:32.281 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:17:32.281 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:17:32.281 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:32.281 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:17:32.281 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:17:32.281 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:17:32.281 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:17:32.281 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:32.281 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:17:32.281 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:17:32.281 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:32.281 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:32.281 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:17:32.281 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:32.281 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:32.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.281 --rc genhtml_branch_coverage=1 00:17:32.281 --rc genhtml_function_coverage=1 00:17:32.281 --rc genhtml_legend=1 00:17:32.281 --rc geninfo_all_blocks=1 00:17:32.281 --rc geninfo_unexecuted_blocks=1 00:17:32.281 00:17:32.281 ' 00:17:32.281 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:32.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.281 --rc genhtml_branch_coverage=1 00:17:32.281 --rc genhtml_function_coverage=1 00:17:32.281 --rc genhtml_legend=1 00:17:32.281 --rc geninfo_all_blocks=1 00:17:32.281 --rc geninfo_unexecuted_blocks=1 00:17:32.281 00:17:32.281 ' 00:17:32.281 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:32.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.281 --rc genhtml_branch_coverage=1 00:17:32.281 --rc genhtml_function_coverage=1 00:17:32.281 --rc genhtml_legend=1 00:17:32.281 --rc geninfo_all_blocks=1 00:17:32.281 --rc geninfo_unexecuted_blocks=1 00:17:32.281 00:17:32.281 ' 00:17:32.281 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:32.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.281 --rc genhtml_branch_coverage=1 00:17:32.281 --rc genhtml_function_coverage=1 00:17:32.281 --rc genhtml_legend=1 00:17:32.281 --rc geninfo_all_blocks=1 00:17:32.281 --rc geninfo_unexecuted_blocks=1 00:17:32.281 00:17:32.281 ' 00:17:32.281 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:32.281 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
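The lt/cmp_versions trace above is how tls.sh decides which lcov flags to export: both version strings are split on '.', '-' and ':' and compared numerically field by field, so 1.15 < 2 holds because the first fields already differ. A compact restatement of that logic (version_lt is a hypothetical simplification; the real cmp_versions in scripts/common.sh also pads unequal lengths and handles '>', '>=', and so on):

    version_lt() {
      local IFS='.-:'
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov older than 2: use legacy LCOV_OPTS"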
00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:32.282 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:17:32.282 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:38.858 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:38.858 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:38.858 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:38.859 Found net devices under 0000:af:00.0: cvl_0_0 00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:38.859 Found net devices under 0000:af:00.1: cvl_0_1 00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP=
00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:17:38.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:38.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.391 ms
00:17:38.859
00:17:38.859 --- 10.0.0.2 ping statistics ---
00:17:38.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:38.859 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms
00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:17:38.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:17:38.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms
00:17:38.859
00:17:38.859 --- 10.0.0.1 ping statistics ---
00:17:38.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:38.859 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms
00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0
00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc
00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable
00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1528663
00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1528663
00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc
00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1528663 ']'
00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:38.859 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:17:38.859 [2024-12-12 10:31:12.033033] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization...
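The nvmf_tcp_init bring-up just traced is worth reading in isolation: one port of the e810 pair (cvl_0_0) moves into a private network namespace to act as the target, the other (cvl_0_1) stays in the root namespace as the initiator, both get addresses on 10.0.0.0/24, TCP port 4420 is opened in the firewall, and connectivity is verified in both directions before the target starts. Condensed from the commands above (error handling omitted):

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                           # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                        # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                    # target -> initiator

This is also why NVMF_APP gets prefixed with NVMF_TARGET_NS_CMD at nvmf/common.sh@293: every target-side command from here on runs under ip netns exec cvl_0_0_ns_spdk.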
00:17:38.859 [2024-12-12 10:31:12.033073] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:38.859 [2024-12-12 10:31:12.112526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.860 [2024-12-12 10:31:12.151601] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:38.860 [2024-12-12 10:31:12.151632] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:38.860 [2024-12-12 10:31:12.151639] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:38.860 [2024-12-12 10:31:12.151645] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:38.860 [2024-12-12 10:31:12.151650] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:38.860 [2024-12-12 10:31:12.152143] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:38.860 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:38.860 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:38.860 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:38.860 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:38.860 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:38.860 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:38.860 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:17:38.860 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:38.860 true 00:17:38.860 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:38.860 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:17:38.860 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:17:38.860 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:17:38.860 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:38.860 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:38.860 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:17:39.118 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:17:39.118 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:17:39.118 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:39.377 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:39.377 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:17:39.377 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:17:39.377 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:17:39.377 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:39.377 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:17:39.636 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:17:39.636 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:17:39.636 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:39.895 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:39.895 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:17:39.895 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:17:39.895 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:17:39.895 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:40.154 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:40.154 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:17:40.413 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:17:40.413 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:17:40.413 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:40.413 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:40.413 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:40.413 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:40.413 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:17:40.413 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:40.413 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:40.413 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:40.413 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:40.413 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:40.413 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:17:40.413 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:40.413 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:17:40.413 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:40.413 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:40.413 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:40.413 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:40.413 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.muzr3IAila 00:17:40.413 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:17:40.413 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.KrssVhLo4a 00:17:40.413 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:40.413 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:40.413 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.muzr3IAila 00:17:40.413 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.KrssVhLo4a 00:17:40.413 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:40.671 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:40.930 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.muzr3IAila 00:17:40.930 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.muzr3IAila 00:17:40.930 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:41.189 [2024-12-12 10:31:14.998518] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:41.189 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:41.189 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:41.461 [2024-12-12 10:31:15.375497] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:41.461 [2024-12-12 10:31:15.375700] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:41.461 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:41.738 malloc0 00:17:41.738 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:17:41.999 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.muzr3IAila
00:17:42.000 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
00:17:42.258 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.muzr3IAila
00:17:52.235 Initializing NVMe Controllers
00:17:52.235 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:17:52.235 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:17:52.235 Initialization complete. Launching workers.
00:17:52.235 ========================================================
00:17:52.235 Latency(us)
00:17:52.235 Device Information : IOPS MiB/s Average min max
00:17:52.235 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16814.07 65.68 3806.40 906.36 5209.32
00:17:52.235 ========================================================
00:17:52.235 Total : 16814.07 65.68 3806.40 906.36 5209.32
00:17:52.235
00:17:52.235 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.muzr3IAila
00:17:52.235 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:17:52.235 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:17:52.235 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:17:52.235 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.muzr3IAila
00:17:52.235 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:17:52.494 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1530955
00:17:52.494 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:17:52.494 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:17:52.494 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1530955 /var/tmp/bdevperf.sock
00:17:52.494 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1530955 ']'
00:17:52.494 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:17:52.494 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:52.494 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
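The file handed to --psk-path above, /tmp/tmp.muzr3IAila, holds the NVMeTLSkey-1:01:...: string that format_interchange_psk produced earlier in the run. The interchange format is the NVMeTLSkey-1 prefix, a two-digit hash identifier (01 here), and the base64 of the PSK bytes with a CRC32 appended, terminated by a colon. A rough stand-alone sketch of that transform, not the script's own helper, and assuming the CRC is appended little-endian (only the trailing base64 characters depend on that assumption):

format_interchange_psk() {   # args: <psk-bytes> <hash-id>; illustrative sketch
    python3 - "$1" "$2" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                   # PSK bytes; the test feeds the ASCII hex string itself
crc = zlib.crc32(key).to_bytes(4, "little")  # CRC32 of the PSK; byte order is an assumption
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02d}:{base64.b64encode(key + crc).decode()}:")
PY
}
format_interchange_psk 00112233445566778899aabbccddeeff 1
# the run above recorded: NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

Both ends must load the same interchange string; the second key generated earlier, /tmp/tmp.KrssVhLo4a, differs only in its PSK bytes and is used later to prove that a mismatched key cannot connect.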
00:17:52.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:52.494 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:52.494 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:52.494 [2024-12-12 10:31:26.301812] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:17:52.494 [2024-12-12 10:31:26.301862] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1530955 ] 00:17:52.494 [2024-12-12 10:31:26.375884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.494 [2024-12-12 10:31:26.418662] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:52.494 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:52.494 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:52.494 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.muzr3IAila 00:17:52.752 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:53.011 [2024-12-12 10:31:26.867806] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:53.011 TLSTESTn1 00:17:53.011 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:53.269 Running I/O for 10 seconds... 
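While the ten-second verify run proceeds, the control-plane work that led here is easy to lose in the trace. Stripped of xtrace noise, setup_nvmf_tgt plus run_bdevperf amount to the RPC sequence below, condensed from the commands recorded above (rpc.py shortened to $rpc; all values as in this run):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# target side (default socket /var/tmp/spdk.sock, process inside the namespace)
$rpc sock_set_default_impl -i ssl
$rpc sock_impl_set_options -i ssl --tls-version 13
$rpc framework_start_init
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: secure channel required
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 /tmp/tmp.muzr3IAila
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
# initiator side (bdevperf, socket /var/tmp/bdevperf.sock)
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.muzr3IAila
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

Note that the key is registered in two independent keyrings, once per process: the target matches the host's PSK against key0 on its side, and bdevperf uses its own key0 to drive the TLS handshake.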
00:17:55.141 5317.00 IOPS, 20.77 MiB/s
[2024-12-12T09:31:30.100Z] 5418.50 IOPS, 21.17 MiB/s
[2024-12-12T09:31:31.477Z] 5485.33 IOPS, 21.43 MiB/s
[2024-12-12T09:31:32.413Z] 5501.00 IOPS, 21.49 MiB/s
[2024-12-12T09:31:33.348Z] 5531.20 IOPS, 21.61 MiB/s
[2024-12-12T09:31:34.284Z] 5548.67 IOPS, 21.67 MiB/s
[2024-12-12T09:31:35.220Z] 5566.14 IOPS, 21.74 MiB/s
[2024-12-12T09:31:36.155Z] 5525.75 IOPS, 21.58 MiB/s
[2024-12-12T09:31:37.091Z] 5467.22 IOPS, 21.36 MiB/s
[2024-12-12T09:31:37.349Z] 5431.10 IOPS, 21.22 MiB/s
00:18:03.326 Latency(us)
00:18:03.326 [2024-12-12T09:31:37.350Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:03.327 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:03.327 Verification LBA range: start 0x0 length 0x2000
00:18:03.327 TLSTESTn1 : 10.02 5432.00 21.22 0.00 0.00 23524.56 4587.52 50431.51
00:18:03.327 [2024-12-12T09:31:37.350Z] ===================================================================================================================
00:18:03.327 [2024-12-12T09:31:37.350Z] Total : 5432.00 21.22 0.00 0.00 23524.56 4587.52 50431.51
00:18:03.327 {
00:18:03.327 "results": [
00:18:03.327 {
00:18:03.327 "job": "TLSTESTn1",
00:18:03.327 "core_mask": "0x4",
00:18:03.327 "workload": "verify",
00:18:03.327 "status": "finished",
00:18:03.327 "verify_range": {
00:18:03.327 "start": 0,
00:18:03.327 "length": 8192
00:18:03.327 },
00:18:03.327 "queue_depth": 128,
00:18:03.327 "io_size": 4096,
00:18:03.327 "runtime": 10.021731,
00:18:03.327 "iops": 5431.995730078966,
00:18:03.327 "mibps": 21.21873332062096,
00:18:03.327 "io_failed": 0,
00:18:03.327 "io_timeout": 0,
00:18:03.327 "avg_latency_us": 23524.559546675206,
00:18:03.327 "min_latency_us": 4587.52,
00:18:03.327 "max_latency_us": 50431.51238095238
00:18:03.327 }
00:18:03.327 ],
00:18:03.327 "core_count": 1
00:18:03.327 }
00:18:03.327 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:18:03.327 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1530955
00:18:03.327 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1530955 ']'
00:18:03.327 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1530955
00:18:03.327 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:18:03.327 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:03.327 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1530955
00:18:03.327 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:18:03.327 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:18:03.327 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1530955' killing process with pid 1530955
00:18:03.327 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1530955
00:18:03.327 Received shutdown signal, test time was about 10.000000 seconds
00:18:03.327
00:18:03.327 Latency(us)
00:18:03.327 [2024-12-12T09:31:37.350Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:03.327 [2024-12-12T09:31:37.350Z] ===================================================================================================================
00:18:03.327 [2024-12-12T09:31:37.350Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:03.327 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1530955
00:18:03.327 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KrssVhLo4a
00:18:03.327 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0
00:18:03.327 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KrssVhLo4a
00:18:03.327 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf
00:18:03.327 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:03.327 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf
00:18:03.327 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:03.327 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KrssVhLo4a
00:18:03.327 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:18:03.327 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:18:03.327 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:18:03.327 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.KrssVhLo4a
00:18:03.327 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:18:03.327 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1532741
00:18:03.327 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:18:03.327 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:18:03.327 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1532741 /var/tmp/bdevperf.sock
00:18:03.327 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1532741 ']'
00:18:03.327 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:03.327 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:03.327 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
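From target/tls.sh@147 onward the script switches to negative testing: run_bdevperf is handed /tmp/tmp.KrssVhLo4a, the second key, which the target was never configured with, and the call is wrapped in NOT so that a connection failure is the passing outcome. The es/valid_exec_arg lines above are that wrapper at work; a minimal stand-in for it (the real helper in autotest_common.sh also validates that its argument is executable, which is what the type -t checks do):

# Invert the wrapped command's status: the test passes only if the command fails.
NOT() {
    if "$@"; then
        return 1    # unexpected success
    fi
    return 0        # expected failure
}
# NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KrssVhLo4a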
00:18:03.327 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:03.327 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:03.586 [2024-12-12 10:31:37.388644] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:18:03.586 [2024-12-12 10:31:37.388692] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1532741 ] 00:18:03.586 [2024-12-12 10:31:37.459942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.586 [2024-12-12 10:31:37.496881] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:03.586 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:03.586 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:03.586 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KrssVhLo4a 00:18:03.845 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:04.104 [2024-12-12 10:31:37.949336] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:04.104 [2024-12-12 10:31:37.953953] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:04.104 [2024-12-12 10:31:37.954608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b61410 (107): Transport endpoint is not connected 00:18:04.104 [2024-12-12 10:31:37.955600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b61410 (9): Bad file descriptor 00:18:04.104 [2024-12-12 10:31:37.956602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:04.104 [2024-12-12 10:31:37.956611] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:04.104 [2024-12-12 10:31:37.956619] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:04.104 [2024-12-12 10:31:37.956630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:18:04.104 request:
00:18:04.104 {
00:18:04.104 "name": "TLSTEST",
00:18:04.104 "trtype": "tcp",
00:18:04.104 "traddr": "10.0.0.2",
00:18:04.104 "adrfam": "ipv4",
00:18:04.104 "trsvcid": "4420",
00:18:04.104 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:18:04.104 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:18:04.104 "prchk_reftag": false,
00:18:04.104 "prchk_guard": false,
00:18:04.104 "hdgst": false,
00:18:04.104 "ddgst": false,
00:18:04.104 "psk": "key0",
00:18:04.104 "allow_unrecognized_csi": false,
00:18:04.104 "method": "bdev_nvme_attach_controller",
00:18:04.104 "req_id": 1
00:18:04.104 }
00:18:04.104 Got JSON-RPC error response
00:18:04.104 response:
00:18:04.104 {
00:18:04.104 "code": -5,
00:18:04.104 "message": "Input/output error"
00:18:04.104 }
00:18:04.104 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1532741
00:18:04.104 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1532741 ']'
00:18:04.104 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1532741
00:18:04.104 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:18:04.104 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:04.104 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1532741
00:18:04.104 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:18:04.104 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:18:04.104 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1532741' killing process with pid 1532741
00:18:04.104 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1532741
00:18:04.104 Received shutdown signal, test time was about 10.000000 seconds
00:18:04.104
00:18:04.104 Latency(us)
00:18:04.104 [2024-12-12T09:31:38.127Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:04.104 [2024-12-12T09:31:38.127Z] ===================================================================================================================
00:18:04.104 [2024-12-12T09:31:38.127Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:18:04.104 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1532741
00:18:04.363 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1
00:18:04.363 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1
00:18:04.363 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:04.363 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:04.363 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:04.363 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.muzr3IAila
00:18:04.363 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0
00:18:04.363 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2
/tmp/tmp.muzr3IAila 00:18:04.363 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:04.363 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:04.363 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:04.363 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:04.363 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.muzr3IAila 00:18:04.363 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:04.363 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:04.363 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:04.363 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.muzr3IAila 00:18:04.363 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:04.363 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1532848 00:18:04.364 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:04.364 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:04.364 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1532848 /var/tmp/bdevperf.sock 00:18:04.364 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1532848 ']' 00:18:04.364 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:04.364 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:04.364 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:04.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:04.364 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:04.364 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:04.364 [2024-12-12 10:31:38.236001] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:18:04.364 [2024-12-12 10:31:38.236052] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1532848 ] 00:18:04.364 [2024-12-12 10:31:38.308526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.364 [2024-12-12 10:31:38.347582] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:04.623 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:04.623 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:04.623 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.muzr3IAila 00:18:04.623 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:04.882 [2024-12-12 10:31:38.811754] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:04.882 [2024-12-12 10:31:38.816244] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:04.882 [2024-12-12 10:31:38.816265] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:04.882 [2024-12-12 10:31:38.816287] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:04.882 [2024-12-12 10:31:38.816970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a96410 (107): Transport endpoint is not connected 00:18:04.882 [2024-12-12 10:31:38.817962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a96410 (9): Bad file descriptor 00:18:04.882 [2024-12-12 10:31:38.818964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:04.882 [2024-12-12 10:31:38.818979] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:04.882 [2024-12-12 10:31:38.818986] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:04.882 [2024-12-12 10:31:38.818994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
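The "Could not find PSK for identity" lines are the heart of this case: for NVMe/TCP the server looks the PSK up under an identity derived from the connecting host's NQN and the subsystem NQN, so presenting a valid key from an NQN that was never added with --psk still fails the handshake. Judging by the messages above, the identity string is assembled roughly like this (sketch; the NVMe0R01 prefix apparently encodes the PSK type and hash identifier):

hostnqn=nqn.2016-06.io.spdk:host2          # this bdevperf instance connects as host2
subnqn=nqn.2016-06.io.spdk:cnode1
identity="NVMe0R01 ${hostnqn} ${subnqn}"   # what tcp_sock_get_key tries to resolve
# only host1 was registered with --psk key0, so this identity maps to no key

The failure therefore surfaces on the initiator as the same errno 107 / Bad file descriptor sequence as the wrong-key case, followed by the JSON-RPC dump below.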
00:18:04.882 request:
00:18:04.882 {
00:18:04.882 "name": "TLSTEST",
00:18:04.882 "trtype": "tcp",
00:18:04.882 "traddr": "10.0.0.2",
00:18:04.882 "adrfam": "ipv4",
00:18:04.882 "trsvcid": "4420",
00:18:04.882 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:18:04.882 "hostnqn": "nqn.2016-06.io.spdk:host2",
00:18:04.882 "prchk_reftag": false,
00:18:04.882 "prchk_guard": false,
00:18:04.882 "hdgst": false,
00:18:04.882 "ddgst": false,
00:18:04.882 "psk": "key0",
00:18:04.882 "allow_unrecognized_csi": false,
00:18:04.882 "method": "bdev_nvme_attach_controller",
00:18:04.882 "req_id": 1
00:18:04.882 }
00:18:04.882 Got JSON-RPC error response
00:18:04.882 response:
00:18:04.882 {
00:18:04.882 "code": -5,
00:18:04.882 "message": "Input/output error"
00:18:04.882 }
00:18:04.882 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1532848
00:18:04.882 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1532848 ']'
00:18:04.882 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1532848
00:18:04.882 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:18:04.882 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:04.882 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1532848
00:18:04.882 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:18:04.882 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:18:04.882 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1532848' killing process with pid 1532848
00:18:04.882 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1532848
00:18:04.882 Received shutdown signal, test time was about 10.000000 seconds
00:18:04.882
00:18:04.882 Latency(us)
00:18:04.882 [2024-12-12T09:31:38.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:04.882 [2024-12-12T09:31:38.905Z] ===================================================================================================================
00:18:04.882 [2024-12-12T09:31:38.905Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:18:04.882 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1532848
00:18:05.141 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1
00:18:05.141 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1
00:18:05.141 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:05.141 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:05.141 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:05.141 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.muzr3IAila
00:18:05.141 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0
00:18:05.141 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1
/tmp/tmp.muzr3IAila 00:18:05.141 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:05.141 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:05.141 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:05.141 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:05.141 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.muzr3IAila 00:18:05.141 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:05.141 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:05.141 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:05.141 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.muzr3IAila 00:18:05.141 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:05.141 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1532986 00:18:05.141 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:05.141 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:05.142 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1532986 /var/tmp/bdevperf.sock 00:18:05.142 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1532986 ']' 00:18:05.142 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:05.142 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:05.142 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:05.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:05.142 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:05.142 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:05.142 [2024-12-12 10:31:39.097743] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:18:05.142 [2024-12-12 10:31:39.097791] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1532986 ] 00:18:05.142 [2024-12-12 10:31:39.162968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.400 [2024-12-12 10:31:39.200533] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:05.400 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:05.400 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:05.400 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.muzr3IAila 00:18:05.659 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:05.918 [2024-12-12 10:31:39.684939] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:05.918 [2024-12-12 10:31:39.696444] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:05.918 [2024-12-12 10:31:39.696464] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:05.918 [2024-12-12 10:31:39.696485] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:05.918 [2024-12-12 10:31:39.697289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac9410 (107): Transport endpoint is not connected 00:18:05.918 [2024-12-12 10:31:39.698282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac9410 (9): Bad file descriptor 00:18:05.918 [2024-12-12 10:31:39.699284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:18:05.918 [2024-12-12 10:31:39.699293] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:05.918 [2024-12-12 10:31:39.699300] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:05.918 [2024-12-12 10:31:39.699307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:18:05.918 request:
00:18:05.918 {
00:18:05.918 "name": "TLSTEST",
00:18:05.918 "trtype": "tcp",
00:18:05.918 "traddr": "10.0.0.2",
00:18:05.918 "adrfam": "ipv4",
00:18:05.918 "trsvcid": "4420",
00:18:05.918 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:18:05.919 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:18:05.919 "prchk_reftag": false,
00:18:05.919 "prchk_guard": false,
00:18:05.919 "hdgst": false,
00:18:05.919 "ddgst": false,
00:18:05.919 "psk": "key0",
00:18:05.919 "allow_unrecognized_csi": false,
00:18:05.919 "method": "bdev_nvme_attach_controller",
00:18:05.919 "req_id": 1
00:18:05.919 }
00:18:05.919 Got JSON-RPC error response
00:18:05.919 response:
00:18:05.919 {
00:18:05.919 "code": -5,
00:18:05.919 "message": "Input/output error"
00:18:05.919 }
00:18:05.919 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1532986
00:18:05.919 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1532986 ']'
00:18:05.919 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1532986
00:18:05.919 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:18:05.919 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:05.919 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1532986
00:18:05.919 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:18:05.919 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:18:05.919 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1532986' killing process with pid 1532986
00:18:05.919 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1532986
00:18:05.919 Received shutdown signal, test time was about 10.000000 seconds
00:18:05.919
00:18:05.919 Latency(us)
00:18:05.919 [2024-12-12T09:31:39.942Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:05.919 [2024-12-12T09:31:39.942Z] ===================================================================================================================
00:18:05.919 [2024-12-12T09:31:39.942Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:18:05.919 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1532986
00:18:05.919 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1
00:18:05.919 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1
00:18:05.919 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:05.919 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:05.919 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:05.919 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''
00:18:05.919 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0
00:18:05.919 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''
00:18:05.919
10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:05.919 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:05.919 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:05.919 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:05.919 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:05.919 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:05.919 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:05.919 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:05.919 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:05.919 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:05.919 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1533213 00:18:05.919 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:05.919 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:05.919 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1533213 /var/tmp/bdevperf.sock 00:18:05.919 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1533213 ']' 00:18:05.919 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:05.919 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:05.919 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:05.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:05.919 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:05.919 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:06.177 [2024-12-12 10:31:39.979641] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
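This final negative case passes an empty string as the key path. As the failures recorded just below show, the file-based keyring rejects anything that is not an absolute path, so registration fails with JSON-RPC code -1 before any connection is attempted, and the subsequent attach fails with -126 because key0 was never created. In short:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''
# -> keyring_file_check_path: Non-absolute paths are not allowed (code -1, Operation not permitted)
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.muzr3IAila
# -> accepted: an absolute path to a 0600-mode key file, as used in the passing run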
00:18:06.177 [2024-12-12 10:31:39.979690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1533213 ]
00:18:06.177 [2024-12-12 10:31:40.055831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:06.177 [2024-12-12 10:31:40.097920] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:18:06.177 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:06.177 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:18:06.177 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''
00:18:06.436 [2024-12-12 10:31:40.365681] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed:
00:18:06.436 [2024-12-12 10:31:40.365713] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring
00:18:06.436 request:
00:18:06.436 {
00:18:06.436 "name": "key0",
00:18:06.436 "path": "",
00:18:06.436 "method": "keyring_file_add_key",
00:18:06.436 "req_id": 1
00:18:06.436 }
00:18:06.436 Got JSON-RPC error response
00:18:06.436 response:
00:18:06.436 {
00:18:06.436 "code": -1,
00:18:06.436 "message": "Operation not permitted"
00:18:06.436 }
00:18:06.436 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
00:18:06.695 [2024-12-12 10:31:40.562271] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:18:06.695 [2024-12-12 10:31:40.562302] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0
00:18:06.695 request:
00:18:06.695 {
00:18:06.695 "name": "TLSTEST",
00:18:06.695 "trtype": "tcp",
00:18:06.695 "traddr": "10.0.0.2",
00:18:06.695 "adrfam": "ipv4",
00:18:06.695 "trsvcid": "4420",
00:18:06.695 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:18:06.695 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:18:06.695 "prchk_reftag": false,
00:18:06.695 "prchk_guard": false,
00:18:06.695 "hdgst": false,
00:18:06.695 "ddgst": false,
00:18:06.695 "psk": "key0",
00:18:06.695 "allow_unrecognized_csi": false,
00:18:06.695 "method": "bdev_nvme_attach_controller",
00:18:06.695 "req_id": 1
00:18:06.695 }
00:18:06.695 Got JSON-RPC error response
00:18:06.695 response:
00:18:06.695 {
00:18:06.695 "code": -126,
00:18:06.695 "message": "Required key not available"
00:18:06.695 }
00:18:06.695 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1533213
00:18:06.695 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1533213 ']'
00:18:06.695 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1533213
00:18:06.695 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:18:06.695 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:06.695 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm=
1533213 00:18:06.695 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:06.695 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:06.695 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1533213' 00:18:06.695 killing process with pid 1533213 00:18:06.695 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1533213 00:18:06.695 Received shutdown signal, test time was about 10.000000 seconds 00:18:06.695 00:18:06.695 Latency(us) 00:18:06.695 [2024-12-12T09:31:40.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.695 [2024-12-12T09:31:40.718Z] =================================================================================================================== 00:18:06.695 [2024-12-12T09:31:40.718Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:06.695 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1533213 00:18:06.955 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:06.955 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:06.955 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:06.955 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:06.955 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:06.955 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1528663 00:18:06.955 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1528663 ']' 00:18:06.955 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1528663 00:18:06.955 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:06.955 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:06.955 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1528663 00:18:06.955 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:06.955 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:06.955 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1528663' 00:18:06.955 killing process with pid 1528663 00:18:06.955 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1528663 00:18:06.955 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1528663 00:18:07.214 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:07.214 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:07.214 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:07.214 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:07.214 10:31:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:07.214 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:07.214 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:07.214 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:07.214 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:07.214 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.kXyiquXCpP 00:18:07.214 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:07.214 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.kXyiquXCpP 00:18:07.214 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:07.214 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:07.214 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:07.214 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:07.214 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1533400 00:18:07.214 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:07.214 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1533400 00:18:07.214 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1533400 ']' 00:18:07.214 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.214 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:07.214 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:07.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:07.214 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:07.214 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:07.214 [2024-12-12 10:31:41.112958] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:18:07.214 [2024-12-12 10:31:41.113005] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:07.214 [2024-12-12 10:31:41.189271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.214 [2024-12-12 10:31:41.227002] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:07.214 [2024-12-12 10:31:41.227037] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
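The key_long value produced above is the NVMe/TCP TLS PSK interchange format: the literal prefix NVMeTLSkey-1, a two-digit hash identifier (02 selects the SHA-384 variant requested by digest=2), and a base64 body, colon-terminated. A sketch of what the format_key python step appears to compute, assuming the configured ASCII key string itself is the PSK material and a zlib CRC32 trailer is appended little-endian:

key=00112233445566778899aabbccddeeff0011223344556677
python3 - "$key" <<'PYEOF'
import base64, sys, zlib
psk = sys.argv[1].encode()                   # the configured PSK bytes
crc = zlib.crc32(psk).to_bytes(4, "little")  # 4-byte CRC32 trailer (assumed LE)
print("NVMeTLSkey-1:02:" + base64.b64encode(psk + crc).decode() + ":")
PYEOF

If those assumptions hold, this reproduces the NVMeTLSkey-1:02:... value echoed above; the script then writes it to a mktemp file and chmods it to 0600 so the keyring will accept the file later.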
00:18:07.214 [2024-12-12 10:31:41.227044] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:07.214 [2024-12-12 10:31:41.227049] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:07.214 [2024-12-12 10:31:41.227054] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:07.214 [2024-12-12 10:31:41.227560] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:07.473 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:07.473 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:07.473 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:07.473 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:07.473 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:07.473 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:07.473 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.kXyiquXCpP 00:18:07.473 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.kXyiquXCpP 00:18:07.473 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:07.732 [2024-12-12 10:31:41.535027] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:07.732 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:07.991 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:07.991 [2024-12-12 10:31:41.924015] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:07.991 [2024-12-12 10:31:41.924231] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:07.991 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:08.250 malloc0 00:18:08.250 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:08.509 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.kXyiquXCpP 00:18:08.509 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:08.767 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kXyiquXCpP 00:18:08.767 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:18:08.768 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:08.768 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:08.768 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.kXyiquXCpP 00:18:08.768 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:08.768 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1533703 00:18:08.768 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:08.768 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:08.768 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1533703 /var/tmp/bdevperf.sock 00:18:08.768 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1533703 ']' 00:18:08.768 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:08.768 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:08.768 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:08.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:08.768 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:08.768 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:08.768 [2024-12-12 10:31:42.769722] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
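Stepping back, the setup_nvmf_tgt helper traced just above reduces to a short RPC sequence against the target: create the TCP transport, create a subsystem, open a TLS-enabled listener, back it with a malloc namespace, then register the PSK and the host allowed to use it. Condensed, with rpc.py standing for the full scripts/rpc.py path and the addresses and NQNs from this run:

rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k requires TLS
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py keyring_file_add_key key0 /tmp/tmp.kXyiquXCpP
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The bdevperf instance being started here is the client half: it loads the same key file into its own keyring and attaches with --psk key0, and this is the run expected to succeed.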
00:18:08.768 [2024-12-12 10:31:42.769770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1533703 ] 00:18:09.026 [2024-12-12 10:31:42.842698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.026 [2024-12-12 10:31:42.882562] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:09.026 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:09.026 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:09.026 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kXyiquXCpP 00:18:09.285 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:09.544 [2024-12-12 10:31:43.327276] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:09.544 TLSTESTn1 00:18:09.544 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:09.544 Running I/O for 10 seconds... 00:18:11.856 5342.00 IOPS, 20.87 MiB/s [2024-12-12T09:31:46.815Z] 5397.50 IOPS, 21.08 MiB/s [2024-12-12T09:31:47.751Z] 5457.33 IOPS, 21.32 MiB/s [2024-12-12T09:31:48.688Z] 5385.25 IOPS, 21.04 MiB/s [2024-12-12T09:31:49.624Z] 5299.60 IOPS, 20.70 MiB/s [2024-12-12T09:31:50.560Z] 5255.67 IOPS, 20.53 MiB/s [2024-12-12T09:31:51.938Z] 5210.86 IOPS, 20.35 MiB/s [2024-12-12T09:31:52.873Z] 5159.50 IOPS, 20.15 MiB/s [2024-12-12T09:31:53.810Z] 5132.33 IOPS, 20.05 MiB/s [2024-12-12T09:31:53.810Z] 5102.30 IOPS, 19.93 MiB/s 00:18:19.787 Latency(us) 00:18:19.787 [2024-12-12T09:31:53.810Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.787 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:19.787 Verification LBA range: start 0x0 length 0x2000 00:18:19.787 TLSTESTn1 : 10.02 5106.30 19.95 0.00 0.00 25030.42 5835.82 33704.23 00:18:19.787 [2024-12-12T09:31:53.810Z] =================================================================================================================== 00:18:19.787 [2024-12-12T09:31:53.810Z] Total : 5106.30 19.95 0.00 0.00 25030.42 5835.82 33704.23 00:18:19.787 { 00:18:19.787 "results": [ 00:18:19.787 { 00:18:19.787 "job": "TLSTESTn1", 00:18:19.787 "core_mask": "0x4", 00:18:19.787 "workload": "verify", 00:18:19.787 "status": "finished", 00:18:19.787 "verify_range": { 00:18:19.787 "start": 0, 00:18:19.787 "length": 8192 00:18:19.787 }, 00:18:19.787 "queue_depth": 128, 00:18:19.787 "io_size": 4096, 00:18:19.787 "runtime": 10.017238, 00:18:19.787 "iops": 5106.297763914564, 00:18:19.787 "mibps": 19.946475640291265, 00:18:19.787 "io_failed": 0, 00:18:19.787 "io_timeout": 0, 00:18:19.787 "avg_latency_us": 25030.416452650465, 00:18:19.787 "min_latency_us": 5835.8247619047615, 00:18:19.787 "max_latency_us": 33704.22857142857 00:18:19.787 } 00:18:19.787 ], 00:18:19.787 
"core_count": 1 00:18:19.787 } 00:18:19.787 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:19.787 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1533703 00:18:19.787 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1533703 ']' 00:18:19.787 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1533703 00:18:19.787 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:19.787 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:19.787 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1533703 00:18:19.787 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:19.787 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:19.787 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1533703' 00:18:19.787 killing process with pid 1533703 00:18:19.787 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1533703 00:18:19.787 Received shutdown signal, test time was about 10.000000 seconds 00:18:19.787 00:18:19.787 Latency(us) 00:18:19.787 [2024-12-12T09:31:53.810Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.787 [2024-12-12T09:31:53.810Z] =================================================================================================================== 00:18:19.787 [2024-12-12T09:31:53.810Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:19.787 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1533703 00:18:19.787 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.kXyiquXCpP 00:18:19.787 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kXyiquXCpP 00:18:19.787 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:19.787 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kXyiquXCpP 00:18:19.787 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:19.787 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:19.787 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:19.787 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:19.787 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kXyiquXCpP 00:18:19.787 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:19.787 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:19.787 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:18:19.787 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.kXyiquXCpP 00:18:19.787 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:19.787 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1535459 00:18:19.787 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:19.787 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:19.787 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1535459 /var/tmp/bdevperf.sock 00:18:19.787 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1535459 ']' 00:18:19.787 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:19.787 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:19.787 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:19.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:19.787 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:19.787 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:20.046 [2024-12-12 10:31:53.830794] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
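As a sanity check on the results JSON from the successful run above: the workload used 4096-byte I/Os, so MiB/s is iops * io_size / 1048576, and 5106.30 * 4096 / 1048576 = 19.95, matching the reported mibps of 19.946. A one-liner to recompute it from a saved copy of that object (results.json is a hypothetical filename):

# Cross-check the reported MiB/s against IOPS and the io_size field.
jq '.results[0] | {reported: .mibps, recomputed: (.iops * .io_size / 1048576)}' results.json

The second bdevperf instance launched here, by contrast, is the negative half of the test: its key file was just chmod'ed to 0666, so the keyring is expected to reject it.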
00:18:20.046 [2024-12-12 10:31:53.830842] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1535459 ] 00:18:20.046 [2024-12-12 10:31:53.902443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.046 [2024-12-12 10:31:53.942067] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:20.046 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:20.046 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:20.046 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kXyiquXCpP 00:18:20.304 [2024-12-12 10:31:54.205999] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.kXyiquXCpP': 0100666 00:18:20.304 [2024-12-12 10:31:54.206031] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:20.304 request: 00:18:20.304 { 00:18:20.304 "name": "key0", 00:18:20.304 "path": "/tmp/tmp.kXyiquXCpP", 00:18:20.304 "method": "keyring_file_add_key", 00:18:20.304 "req_id": 1 00:18:20.304 } 00:18:20.304 Got JSON-RPC error response 00:18:20.304 response: 00:18:20.305 { 00:18:20.305 "code": -1, 00:18:20.305 "message": "Operation not permitted" 00:18:20.305 } 00:18:20.305 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:20.563 [2024-12-12 10:31:54.406605] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:20.563 [2024-12-12 10:31:54.406634] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:20.563 request: 00:18:20.563 { 00:18:20.563 "name": "TLSTEST", 00:18:20.563 "trtype": "tcp", 00:18:20.563 "traddr": "10.0.0.2", 00:18:20.563 "adrfam": "ipv4", 00:18:20.563 "trsvcid": "4420", 00:18:20.563 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.563 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:20.563 "prchk_reftag": false, 00:18:20.563 "prchk_guard": false, 00:18:20.563 "hdgst": false, 00:18:20.563 "ddgst": false, 00:18:20.563 "psk": "key0", 00:18:20.563 "allow_unrecognized_csi": false, 00:18:20.563 "method": "bdev_nvme_attach_controller", 00:18:20.563 "req_id": 1 00:18:20.563 } 00:18:20.563 Got JSON-RPC error response 00:18:20.563 response: 00:18:20.563 { 00:18:20.563 "code": -126, 00:18:20.563 "message": "Required key not available" 00:18:20.563 } 00:18:20.563 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1535459 00:18:20.563 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1535459 ']' 00:18:20.563 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1535459 00:18:20.563 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:20.563 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:20.563 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1535459 00:18:20.563 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:20.563 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:20.563 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1535459' 00:18:20.563 killing process with pid 1535459 00:18:20.563 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1535459 00:18:20.563 Received shutdown signal, test time was about 10.000000 seconds 00:18:20.563 00:18:20.563 Latency(us) 00:18:20.563 [2024-12-12T09:31:54.586Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.563 [2024-12-12T09:31:54.586Z] =================================================================================================================== 00:18:20.563 [2024-12-12T09:31:54.586Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:20.563 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1535459 00:18:20.823 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:20.823 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:20.823 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:20.823 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:20.823 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:20.823 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1533400 00:18:20.823 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1533400 ']' 00:18:20.823 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1533400 00:18:20.823 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:20.823 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:20.823 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1533400 00:18:20.823 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:20.823 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:20.823 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1533400' 00:18:20.823 killing process with pid 1533400 00:18:20.823 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1533400 00:18:20.823 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1533400 00:18:20.823 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:18:20.823 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:20.823 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:20.823 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:21.081 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=1535521 00:18:21.081 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:21.081 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1535521 00:18:21.081 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1535521 ']' 00:18:21.081 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:21.081 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:21.081 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:21.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:21.081 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:21.081 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:21.081 [2024-12-12 10:31:54.892751] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:18:21.081 [2024-12-12 10:31:54.892800] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:21.081 [2024-12-12 10:31:54.972434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.081 [2024-12-12 10:31:55.012738] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:21.081 [2024-12-12 10:31:55.012772] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:21.081 [2024-12-12 10:31:55.012780] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:21.081 [2024-12-12 10:31:55.012791] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:21.081 [2024-12-12 10:31:55.012797] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
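The NOT wrapper that keeps appearing in these traces (around run_bdevperf earlier, and around setup_nvmf_tgt next) inverts the wrapped command's exit status, so a negative test passes exactly when the command fails; the es bookkeeping seen above additionally screens statuses over 128, which indicate death by signal rather than a clean error. A stripped-down sketch of the idiom (the real helper in autotest_common.sh does more validation):

# Succeed only if the wrapped command fails with an ordinary error status.
NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return 1   # killed by a signal: not a clean failure
    (( es != 0 ))                # pass when the command errored, fail when it worked
}

NOT rpc.py keyring_file_add_key key0 ''   # passes: the empty key path is rejected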
00:18:21.081 [2024-12-12 10:31:55.013308] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:21.340 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:21.340 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:21.340 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:21.340 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:21.340 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:21.340 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:21.340 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.kXyiquXCpP 00:18:21.340 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:21.340 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.kXyiquXCpP 00:18:21.340 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:18:21.340 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:21.340 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:18:21.340 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:21.340 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.kXyiquXCpP 00:18:21.340 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.kXyiquXCpP 00:18:21.340 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:21.340 [2024-12-12 10:31:55.326072] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:21.340 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:21.598 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:21.857 [2024-12-12 10:31:55.711053] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:21.857 [2024-12-12 10:31:55.711266] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:21.857 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:22.115 malloc0 00:18:22.115 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:22.115 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.kXyiquXCpP 00:18:22.374 [2024-12-12 
10:31:56.300484] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.kXyiquXCpP': 0100666 00:18:22.374 [2024-12-12 10:31:56.300507] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:22.374 request: 00:18:22.374 { 00:18:22.374 "name": "key0", 00:18:22.374 "path": "/tmp/tmp.kXyiquXCpP", 00:18:22.374 "method": "keyring_file_add_key", 00:18:22.374 "req_id": 1 00:18:22.374 } 00:18:22.374 Got JSON-RPC error response 00:18:22.374 response: 00:18:22.374 { 00:18:22.374 "code": -1, 00:18:22.374 "message": "Operation not permitted" 00:18:22.374 } 00:18:22.374 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:22.633 [2024-12-12 10:31:56.505048] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:18:22.633 [2024-12-12 10:31:56.505081] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:22.633 request: 00:18:22.633 { 00:18:22.633 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:22.633 "host": "nqn.2016-06.io.spdk:host1", 00:18:22.633 "psk": "key0", 00:18:22.633 "method": "nvmf_subsystem_add_host", 00:18:22.634 "req_id": 1 00:18:22.634 } 00:18:22.634 Got JSON-RPC error response 00:18:22.634 response: 00:18:22.634 { 00:18:22.634 "code": -32603, 00:18:22.634 "message": "Internal error" 00:18:22.634 } 00:18:22.634 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:22.634 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:22.634 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:22.634 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:22.634 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1535521 00:18:22.634 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1535521 ']' 00:18:22.634 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1535521 00:18:22.634 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:22.634 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:22.634 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1535521 00:18:22.634 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:22.634 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:22.634 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1535521' 00:18:22.634 killing process with pid 1535521 00:18:22.634 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1535521 00:18:22.634 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1535521 00:18:22.893 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.kXyiquXCpP 00:18:22.893 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:18:22.893 10:31:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:22.893 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:22.893 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.893 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1535984 00:18:22.893 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1535984 00:18:22.893 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:22.893 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1535984 ']' 00:18:22.893 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.893 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:22.893 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.893 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:22.893 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.893 [2024-12-12 10:31:56.804833] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:18:22.893 [2024-12-12 10:31:56.804878] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:22.893 [2024-12-12 10:31:56.879724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.151 [2024-12-12 10:31:56.918982] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:23.151 [2024-12-12 10:31:56.919015] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:23.151 [2024-12-12 10:31:56.919022] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:23.151 [2024-12-12 10:31:56.919029] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:23.151 [2024-12-12 10:31:56.919034] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
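The 0100666 rejected earlier is the file's full st_mode in octal: 0100000 flags a regular file, and 0666 leaves the key readable and writable by group and others, which the keyring_file backend rejects, judging by the error; the chmod 0600 just traced is what makes the same file acceptable again. A small guard in the same spirit (check_psk_mode is an illustrative name, and stat -c is the GNU form):

# Refuse PSK files with any group/other permission bits, as keyring_file does.
check_psk_mode() {
    local mode
    mode=$(stat -c '%a' "$1")        # e.g. 600 or 666
    if (( 0$mode & 077 )); then      # leading 0 makes bash read it as octal
        echo "refusing $1: mode $mode is too open" >&2
        return 1
    fi
}
check_psk_mode /tmp/tmp.kXyiquXCpP && rpc.py keyring_file_add_key key0 /tmp/tmp.kXyiquXCpP

With the mode corrected, the target restart that follows can register the key and the host, and the final bdevperf attach is expected to succeed.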
00:18:23.151 [2024-12-12 10:31:56.919512] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:23.152 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:23.152 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:23.152 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:23.152 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:23.152 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:23.152 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:23.152 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.kXyiquXCpP 00:18:23.152 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.kXyiquXCpP 00:18:23.152 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:23.410 [2024-12-12 10:31:57.227561] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:23.410 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:23.669 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:23.669 [2024-12-12 10:31:57.616562] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:23.669 [2024-12-12 10:31:57.616773] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:23.669 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:23.928 malloc0 00:18:23.928 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:24.186 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.kXyiquXCpP 00:18:24.445 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:24.445 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:24.445 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1536236 00:18:24.445 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:24.445 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1536236 /var/tmp/bdevperf.sock 00:18:24.445 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 1536236 ']' 00:18:24.445 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:24.445 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:24.445 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:24.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:24.445 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:24.445 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.445 [2024-12-12 10:31:58.463894] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:18:24.445 [2024-12-12 10:31:58.463941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1536236 ] 00:18:24.704 [2024-12-12 10:31:58.537049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.704 [2024-12-12 10:31:58.576777] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:24.704 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:24.704 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:24.704 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kXyiquXCpP 00:18:24.963 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:25.221 [2024-12-12 10:31:59.021219] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:25.221 TLSTESTn1 00:18:25.221 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:25.480 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:18:25.480 "subsystems": [ 00:18:25.480 { 00:18:25.480 "subsystem": "keyring", 00:18:25.480 "config": [ 00:18:25.480 { 00:18:25.480 "method": "keyring_file_add_key", 00:18:25.480 "params": { 00:18:25.480 "name": "key0", 00:18:25.480 "path": "/tmp/tmp.kXyiquXCpP" 00:18:25.480 } 00:18:25.480 } 00:18:25.480 ] 00:18:25.480 }, 00:18:25.480 { 00:18:25.480 "subsystem": "iobuf", 00:18:25.480 "config": [ 00:18:25.480 { 00:18:25.480 "method": "iobuf_set_options", 00:18:25.480 "params": { 00:18:25.480 "small_pool_count": 8192, 00:18:25.480 "large_pool_count": 1024, 00:18:25.480 "small_bufsize": 8192, 00:18:25.480 "large_bufsize": 135168, 00:18:25.480 "enable_numa": false 00:18:25.480 } 00:18:25.480 } 00:18:25.480 ] 00:18:25.480 }, 00:18:25.480 { 00:18:25.480 "subsystem": "sock", 00:18:25.480 "config": [ 00:18:25.480 { 00:18:25.480 "method": "sock_set_default_impl", 00:18:25.480 "params": { 00:18:25.480 "impl_name": "posix" 
00:18:25.480 } 00:18:25.480 }, 00:18:25.480 { 00:18:25.480 "method": "sock_impl_set_options", 00:18:25.480 "params": { 00:18:25.480 "impl_name": "ssl", 00:18:25.480 "recv_buf_size": 4096, 00:18:25.480 "send_buf_size": 4096, 00:18:25.480 "enable_recv_pipe": true, 00:18:25.480 "enable_quickack": false, 00:18:25.480 "enable_placement_id": 0, 00:18:25.480 "enable_zerocopy_send_server": true, 00:18:25.480 "enable_zerocopy_send_client": false, 00:18:25.480 "zerocopy_threshold": 0, 00:18:25.480 "tls_version": 0, 00:18:25.480 "enable_ktls": false 00:18:25.481 } 00:18:25.481 }, 00:18:25.481 { 00:18:25.481 "method": "sock_impl_set_options", 00:18:25.481 "params": { 00:18:25.481 "impl_name": "posix", 00:18:25.481 "recv_buf_size": 2097152, 00:18:25.481 "send_buf_size": 2097152, 00:18:25.481 "enable_recv_pipe": true, 00:18:25.481 "enable_quickack": false, 00:18:25.481 "enable_placement_id": 0, 00:18:25.481 "enable_zerocopy_send_server": true, 00:18:25.481 "enable_zerocopy_send_client": false, 00:18:25.481 "zerocopy_threshold": 0, 00:18:25.481 "tls_version": 0, 00:18:25.481 "enable_ktls": false 00:18:25.481 } 00:18:25.481 } 00:18:25.481 ] 00:18:25.481 }, 00:18:25.481 { 00:18:25.481 "subsystem": "vmd", 00:18:25.481 "config": [] 00:18:25.481 }, 00:18:25.481 { 00:18:25.481 "subsystem": "accel", 00:18:25.481 "config": [ 00:18:25.481 { 00:18:25.481 "method": "accel_set_options", 00:18:25.481 "params": { 00:18:25.481 "small_cache_size": 128, 00:18:25.481 "large_cache_size": 16, 00:18:25.481 "task_count": 2048, 00:18:25.481 "sequence_count": 2048, 00:18:25.481 "buf_count": 2048 00:18:25.481 } 00:18:25.481 } 00:18:25.481 ] 00:18:25.481 }, 00:18:25.481 { 00:18:25.481 "subsystem": "bdev", 00:18:25.481 "config": [ 00:18:25.481 { 00:18:25.481 "method": "bdev_set_options", 00:18:25.481 "params": { 00:18:25.481 "bdev_io_pool_size": 65535, 00:18:25.481 "bdev_io_cache_size": 256, 00:18:25.481 "bdev_auto_examine": true, 00:18:25.481 "iobuf_small_cache_size": 128, 00:18:25.481 "iobuf_large_cache_size": 16 00:18:25.481 } 00:18:25.481 }, 00:18:25.481 { 00:18:25.481 "method": "bdev_raid_set_options", 00:18:25.481 "params": { 00:18:25.481 "process_window_size_kb": 1024, 00:18:25.481 "process_max_bandwidth_mb_sec": 0 00:18:25.481 } 00:18:25.481 }, 00:18:25.481 { 00:18:25.481 "method": "bdev_iscsi_set_options", 00:18:25.481 "params": { 00:18:25.481 "timeout_sec": 30 00:18:25.481 } 00:18:25.481 }, 00:18:25.481 { 00:18:25.481 "method": "bdev_nvme_set_options", 00:18:25.481 "params": { 00:18:25.481 "action_on_timeout": "none", 00:18:25.481 "timeout_us": 0, 00:18:25.481 "timeout_admin_us": 0, 00:18:25.481 "keep_alive_timeout_ms": 10000, 00:18:25.481 "arbitration_burst": 0, 00:18:25.481 "low_priority_weight": 0, 00:18:25.481 "medium_priority_weight": 0, 00:18:25.481 "high_priority_weight": 0, 00:18:25.481 "nvme_adminq_poll_period_us": 10000, 00:18:25.481 "nvme_ioq_poll_period_us": 0, 00:18:25.481 "io_queue_requests": 0, 00:18:25.481 "delay_cmd_submit": true, 00:18:25.481 "transport_retry_count": 4, 00:18:25.481 "bdev_retry_count": 3, 00:18:25.481 "transport_ack_timeout": 0, 00:18:25.481 "ctrlr_loss_timeout_sec": 0, 00:18:25.481 "reconnect_delay_sec": 0, 00:18:25.481 "fast_io_fail_timeout_sec": 0, 00:18:25.481 "disable_auto_failback": false, 00:18:25.481 "generate_uuids": false, 00:18:25.481 "transport_tos": 0, 00:18:25.481 "nvme_error_stat": false, 00:18:25.481 "rdma_srq_size": 0, 00:18:25.481 "io_path_stat": false, 00:18:25.481 "allow_accel_sequence": false, 00:18:25.481 "rdma_max_cq_size": 0, 00:18:25.481 
"rdma_cm_event_timeout_ms": 0, 00:18:25.481 "dhchap_digests": [ 00:18:25.481 "sha256", 00:18:25.481 "sha384", 00:18:25.481 "sha512" 00:18:25.481 ], 00:18:25.481 "dhchap_dhgroups": [ 00:18:25.481 "null", 00:18:25.481 "ffdhe2048", 00:18:25.481 "ffdhe3072", 00:18:25.481 "ffdhe4096", 00:18:25.481 "ffdhe6144", 00:18:25.481 "ffdhe8192" 00:18:25.481 ], 00:18:25.481 "rdma_umr_per_io": false 00:18:25.481 } 00:18:25.481 }, 00:18:25.481 { 00:18:25.481 "method": "bdev_nvme_set_hotplug", 00:18:25.481 "params": { 00:18:25.481 "period_us": 100000, 00:18:25.481 "enable": false 00:18:25.481 } 00:18:25.481 }, 00:18:25.481 { 00:18:25.481 "method": "bdev_malloc_create", 00:18:25.481 "params": { 00:18:25.481 "name": "malloc0", 00:18:25.481 "num_blocks": 8192, 00:18:25.481 "block_size": 4096, 00:18:25.481 "physical_block_size": 4096, 00:18:25.481 "uuid": "06cb0ce5-aa9d-4140-a2dd-02417527afde", 00:18:25.481 "optimal_io_boundary": 0, 00:18:25.481 "md_size": 0, 00:18:25.481 "dif_type": 0, 00:18:25.481 "dif_is_head_of_md": false, 00:18:25.481 "dif_pi_format": 0 00:18:25.481 } 00:18:25.481 }, 00:18:25.481 { 00:18:25.481 "method": "bdev_wait_for_examine" 00:18:25.481 } 00:18:25.481 ] 00:18:25.481 }, 00:18:25.481 { 00:18:25.481 "subsystem": "nbd", 00:18:25.481 "config": [] 00:18:25.481 }, 00:18:25.481 { 00:18:25.481 "subsystem": "scheduler", 00:18:25.481 "config": [ 00:18:25.481 { 00:18:25.481 "method": "framework_set_scheduler", 00:18:25.481 "params": { 00:18:25.481 "name": "static" 00:18:25.481 } 00:18:25.481 } 00:18:25.481 ] 00:18:25.481 }, 00:18:25.481 { 00:18:25.481 "subsystem": "nvmf", 00:18:25.481 "config": [ 00:18:25.481 { 00:18:25.481 "method": "nvmf_set_config", 00:18:25.481 "params": { 00:18:25.481 "discovery_filter": "match_any", 00:18:25.481 "admin_cmd_passthru": { 00:18:25.481 "identify_ctrlr": false 00:18:25.481 }, 00:18:25.481 "dhchap_digests": [ 00:18:25.481 "sha256", 00:18:25.481 "sha384", 00:18:25.481 "sha512" 00:18:25.481 ], 00:18:25.481 "dhchap_dhgroups": [ 00:18:25.481 "null", 00:18:25.481 "ffdhe2048", 00:18:25.481 "ffdhe3072", 00:18:25.481 "ffdhe4096", 00:18:25.481 "ffdhe6144", 00:18:25.481 "ffdhe8192" 00:18:25.481 ] 00:18:25.481 } 00:18:25.481 }, 00:18:25.481 { 00:18:25.481 "method": "nvmf_set_max_subsystems", 00:18:25.481 "params": { 00:18:25.481 "max_subsystems": 1024 00:18:25.481 } 00:18:25.481 }, 00:18:25.481 { 00:18:25.481 "method": "nvmf_set_crdt", 00:18:25.481 "params": { 00:18:25.481 "crdt1": 0, 00:18:25.481 "crdt2": 0, 00:18:25.481 "crdt3": 0 00:18:25.481 } 00:18:25.481 }, 00:18:25.481 { 00:18:25.481 "method": "nvmf_create_transport", 00:18:25.481 "params": { 00:18:25.481 "trtype": "TCP", 00:18:25.481 "max_queue_depth": 128, 00:18:25.481 "max_io_qpairs_per_ctrlr": 127, 00:18:25.481 "in_capsule_data_size": 4096, 00:18:25.481 "max_io_size": 131072, 00:18:25.481 "io_unit_size": 131072, 00:18:25.481 "max_aq_depth": 128, 00:18:25.481 "num_shared_buffers": 511, 00:18:25.481 "buf_cache_size": 4294967295, 00:18:25.481 "dif_insert_or_strip": false, 00:18:25.481 "zcopy": false, 00:18:25.481 "c2h_success": false, 00:18:25.481 "sock_priority": 0, 00:18:25.481 "abort_timeout_sec": 1, 00:18:25.481 "ack_timeout": 0, 00:18:25.481 "data_wr_pool_size": 0 00:18:25.481 } 00:18:25.481 }, 00:18:25.481 { 00:18:25.481 "method": "nvmf_create_subsystem", 00:18:25.481 "params": { 00:18:25.481 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:25.481 "allow_any_host": false, 00:18:25.481 "serial_number": "SPDK00000000000001", 00:18:25.481 "model_number": "SPDK bdev Controller", 00:18:25.481 "max_namespaces": 10, 
00:18:25.481 "min_cntlid": 1, 00:18:25.481 "max_cntlid": 65519, 00:18:25.481 "ana_reporting": false 00:18:25.481 } 00:18:25.481 }, 00:18:25.481 { 00:18:25.481 "method": "nvmf_subsystem_add_host", 00:18:25.482 "params": { 00:18:25.482 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:25.482 "host": "nqn.2016-06.io.spdk:host1", 00:18:25.482 "psk": "key0" 00:18:25.482 } 00:18:25.482 }, 00:18:25.482 { 00:18:25.482 "method": "nvmf_subsystem_add_ns", 00:18:25.482 "params": { 00:18:25.482 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:25.482 "namespace": { 00:18:25.482 "nsid": 1, 00:18:25.482 "bdev_name": "malloc0", 00:18:25.482 "nguid": "06CB0CE5AA9D4140A2DD02417527AFDE", 00:18:25.482 "uuid": "06cb0ce5-aa9d-4140-a2dd-02417527afde", 00:18:25.482 "no_auto_visible": false 00:18:25.482 } 00:18:25.482 } 00:18:25.482 }, 00:18:25.482 { 00:18:25.482 "method": "nvmf_subsystem_add_listener", 00:18:25.482 "params": { 00:18:25.482 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:25.482 "listen_address": { 00:18:25.482 "trtype": "TCP", 00:18:25.482 "adrfam": "IPv4", 00:18:25.482 "traddr": "10.0.0.2", 00:18:25.482 "trsvcid": "4420" 00:18:25.482 }, 00:18:25.482 "secure_channel": true 00:18:25.482 } 00:18:25.482 } 00:18:25.482 ] 00:18:25.482 } 00:18:25.482 ] 00:18:25.482 }' 00:18:25.482 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:25.741 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:18:25.741 "subsystems": [ 00:18:25.741 { 00:18:25.741 "subsystem": "keyring", 00:18:25.741 "config": [ 00:18:25.741 { 00:18:25.741 "method": "keyring_file_add_key", 00:18:25.741 "params": { 00:18:25.741 "name": "key0", 00:18:25.741 "path": "/tmp/tmp.kXyiquXCpP" 00:18:25.741 } 00:18:25.741 } 00:18:25.741 ] 00:18:25.741 }, 00:18:25.741 { 00:18:25.741 "subsystem": "iobuf", 00:18:25.741 "config": [ 00:18:25.741 { 00:18:25.741 "method": "iobuf_set_options", 00:18:25.741 "params": { 00:18:25.741 "small_pool_count": 8192, 00:18:25.741 "large_pool_count": 1024, 00:18:25.741 "small_bufsize": 8192, 00:18:25.741 "large_bufsize": 135168, 00:18:25.741 "enable_numa": false 00:18:25.741 } 00:18:25.741 } 00:18:25.741 ] 00:18:25.741 }, 00:18:25.741 { 00:18:25.741 "subsystem": "sock", 00:18:25.741 "config": [ 00:18:25.741 { 00:18:25.741 "method": "sock_set_default_impl", 00:18:25.741 "params": { 00:18:25.741 "impl_name": "posix" 00:18:25.741 } 00:18:25.741 }, 00:18:25.741 { 00:18:25.741 "method": "sock_impl_set_options", 00:18:25.741 "params": { 00:18:25.741 "impl_name": "ssl", 00:18:25.741 "recv_buf_size": 4096, 00:18:25.741 "send_buf_size": 4096, 00:18:25.741 "enable_recv_pipe": true, 00:18:25.741 "enable_quickack": false, 00:18:25.741 "enable_placement_id": 0, 00:18:25.741 "enable_zerocopy_send_server": true, 00:18:25.741 "enable_zerocopy_send_client": false, 00:18:25.741 "zerocopy_threshold": 0, 00:18:25.741 "tls_version": 0, 00:18:25.741 "enable_ktls": false 00:18:25.741 } 00:18:25.741 }, 00:18:25.741 { 00:18:25.741 "method": "sock_impl_set_options", 00:18:25.741 "params": { 00:18:25.741 "impl_name": "posix", 00:18:25.741 "recv_buf_size": 2097152, 00:18:25.741 "send_buf_size": 2097152, 00:18:25.741 "enable_recv_pipe": true, 00:18:25.741 "enable_quickack": false, 00:18:25.741 "enable_placement_id": 0, 00:18:25.741 "enable_zerocopy_send_server": true, 00:18:25.741 "enable_zerocopy_send_client": false, 00:18:25.741 "zerocopy_threshold": 0, 00:18:25.741 "tls_version": 0, 00:18:25.741 
"enable_ktls": false 00:18:25.741 } 00:18:25.741 } 00:18:25.741 ] 00:18:25.741 }, 00:18:25.741 { 00:18:25.741 "subsystem": "vmd", 00:18:25.741 "config": [] 00:18:25.741 }, 00:18:25.741 { 00:18:25.741 "subsystem": "accel", 00:18:25.741 "config": [ 00:18:25.741 { 00:18:25.741 "method": "accel_set_options", 00:18:25.741 "params": { 00:18:25.741 "small_cache_size": 128, 00:18:25.741 "large_cache_size": 16, 00:18:25.741 "task_count": 2048, 00:18:25.741 "sequence_count": 2048, 00:18:25.741 "buf_count": 2048 00:18:25.741 } 00:18:25.741 } 00:18:25.741 ] 00:18:25.741 }, 00:18:25.741 { 00:18:25.741 "subsystem": "bdev", 00:18:25.741 "config": [ 00:18:25.741 { 00:18:25.741 "method": "bdev_set_options", 00:18:25.741 "params": { 00:18:25.741 "bdev_io_pool_size": 65535, 00:18:25.741 "bdev_io_cache_size": 256, 00:18:25.741 "bdev_auto_examine": true, 00:18:25.741 "iobuf_small_cache_size": 128, 00:18:25.742 "iobuf_large_cache_size": 16 00:18:25.742 } 00:18:25.742 }, 00:18:25.742 { 00:18:25.742 "method": "bdev_raid_set_options", 00:18:25.742 "params": { 00:18:25.742 "process_window_size_kb": 1024, 00:18:25.742 "process_max_bandwidth_mb_sec": 0 00:18:25.742 } 00:18:25.742 }, 00:18:25.742 { 00:18:25.742 "method": "bdev_iscsi_set_options", 00:18:25.742 "params": { 00:18:25.742 "timeout_sec": 30 00:18:25.742 } 00:18:25.742 }, 00:18:25.742 { 00:18:25.742 "method": "bdev_nvme_set_options", 00:18:25.742 "params": { 00:18:25.742 "action_on_timeout": "none", 00:18:25.742 "timeout_us": 0, 00:18:25.742 "timeout_admin_us": 0, 00:18:25.742 "keep_alive_timeout_ms": 10000, 00:18:25.742 "arbitration_burst": 0, 00:18:25.742 "low_priority_weight": 0, 00:18:25.742 "medium_priority_weight": 0, 00:18:25.742 "high_priority_weight": 0, 00:18:25.742 "nvme_adminq_poll_period_us": 10000, 00:18:25.742 "nvme_ioq_poll_period_us": 0, 00:18:25.742 "io_queue_requests": 512, 00:18:25.742 "delay_cmd_submit": true, 00:18:25.742 "transport_retry_count": 4, 00:18:25.742 "bdev_retry_count": 3, 00:18:25.742 "transport_ack_timeout": 0, 00:18:25.742 "ctrlr_loss_timeout_sec": 0, 00:18:25.742 "reconnect_delay_sec": 0, 00:18:25.742 "fast_io_fail_timeout_sec": 0, 00:18:25.742 "disable_auto_failback": false, 00:18:25.742 "generate_uuids": false, 00:18:25.742 "transport_tos": 0, 00:18:25.742 "nvme_error_stat": false, 00:18:25.742 "rdma_srq_size": 0, 00:18:25.742 "io_path_stat": false, 00:18:25.742 "allow_accel_sequence": false, 00:18:25.742 "rdma_max_cq_size": 0, 00:18:25.742 "rdma_cm_event_timeout_ms": 0, 00:18:25.742 "dhchap_digests": [ 00:18:25.742 "sha256", 00:18:25.742 "sha384", 00:18:25.742 "sha512" 00:18:25.742 ], 00:18:25.742 "dhchap_dhgroups": [ 00:18:25.742 "null", 00:18:25.742 "ffdhe2048", 00:18:25.742 "ffdhe3072", 00:18:25.742 "ffdhe4096", 00:18:25.742 "ffdhe6144", 00:18:25.742 "ffdhe8192" 00:18:25.742 ], 00:18:25.742 "rdma_umr_per_io": false 00:18:25.742 } 00:18:25.742 }, 00:18:25.742 { 00:18:25.742 "method": "bdev_nvme_attach_controller", 00:18:25.742 "params": { 00:18:25.742 "name": "TLSTEST", 00:18:25.742 "trtype": "TCP", 00:18:25.742 "adrfam": "IPv4", 00:18:25.742 "traddr": "10.0.0.2", 00:18:25.742 "trsvcid": "4420", 00:18:25.742 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:25.742 "prchk_reftag": false, 00:18:25.742 "prchk_guard": false, 00:18:25.742 "ctrlr_loss_timeout_sec": 0, 00:18:25.742 "reconnect_delay_sec": 0, 00:18:25.742 "fast_io_fail_timeout_sec": 0, 00:18:25.742 "psk": "key0", 00:18:25.742 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:25.742 "hdgst": false, 00:18:25.742 "ddgst": false, 00:18:25.742 "multipath": "multipath" 
00:18:25.742 } 00:18:25.742 }, 00:18:25.742 { 00:18:25.742 "method": "bdev_nvme_set_hotplug", 00:18:25.742 "params": { 00:18:25.742 "period_us": 100000, 00:18:25.742 "enable": false 00:18:25.742 } 00:18:25.742 }, 00:18:25.742 { 00:18:25.742 "method": "bdev_wait_for_examine" 00:18:25.742 } 00:18:25.742 ] 00:18:25.742 }, 00:18:25.742 { 00:18:25.742 "subsystem": "nbd", 00:18:25.742 "config": [] 00:18:25.742 } 00:18:25.742 ] 00:18:25.742 }' 00:18:25.742 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1536236 00:18:25.742 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1536236 ']' 00:18:25.742 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1536236 00:18:25.742 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:25.742 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:25.742 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1536236 00:18:25.742 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:25.742 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:25.742 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1536236' 00:18:25.742 killing process with pid 1536236 00:18:25.742 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1536236 00:18:25.742 Received shutdown signal, test time was about 10.000000 seconds 00:18:25.742 00:18:25.742 Latency(us) 00:18:25.742 [2024-12-12T09:31:59.765Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.742 [2024-12-12T09:31:59.765Z] =================================================================================================================== 00:18:25.742 [2024-12-12T09:31:59.765Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:25.742 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1536236 00:18:26.001 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1535984 00:18:26.001 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1535984 ']' 00:18:26.001 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1535984 00:18:26.001 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:26.001 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:26.001 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1535984 00:18:26.001 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:26.001 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:26.001 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1535984' 00:18:26.001 killing process with pid 1535984 00:18:26.001 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1535984 00:18:26.001 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # 
wait 1535984 00:18:26.261 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:26.261 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:26.261 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:26.261 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:18:26.261 "subsystems": [ 00:18:26.261 { 00:18:26.261 "subsystem": "keyring", 00:18:26.261 "config": [ 00:18:26.261 { 00:18:26.261 "method": "keyring_file_add_key", 00:18:26.261 "params": { 00:18:26.261 "name": "key0", 00:18:26.261 "path": "/tmp/tmp.kXyiquXCpP" 00:18:26.261 } 00:18:26.261 } 00:18:26.261 ] 00:18:26.261 }, 00:18:26.261 { 00:18:26.261 "subsystem": "iobuf", 00:18:26.261 "config": [ 00:18:26.261 { 00:18:26.261 "method": "iobuf_set_options", 00:18:26.261 "params": { 00:18:26.261 "small_pool_count": 8192, 00:18:26.261 "large_pool_count": 1024, 00:18:26.261 "small_bufsize": 8192, 00:18:26.261 "large_bufsize": 135168, 00:18:26.261 "enable_numa": false 00:18:26.261 } 00:18:26.261 } 00:18:26.261 ] 00:18:26.261 }, 00:18:26.261 { 00:18:26.261 "subsystem": "sock", 00:18:26.261 "config": [ 00:18:26.261 { 00:18:26.261 "method": "sock_set_default_impl", 00:18:26.261 "params": { 00:18:26.261 "impl_name": "posix" 00:18:26.261 } 00:18:26.261 }, 00:18:26.261 { 00:18:26.261 "method": "sock_impl_set_options", 00:18:26.261 "params": { 00:18:26.261 "impl_name": "ssl", 00:18:26.261 "recv_buf_size": 4096, 00:18:26.261 "send_buf_size": 4096, 00:18:26.261 "enable_recv_pipe": true, 00:18:26.261 "enable_quickack": false, 00:18:26.261 "enable_placement_id": 0, 00:18:26.261 "enable_zerocopy_send_server": true, 00:18:26.261 "enable_zerocopy_send_client": false, 00:18:26.261 "zerocopy_threshold": 0, 00:18:26.261 "tls_version": 0, 00:18:26.261 "enable_ktls": false 00:18:26.261 } 00:18:26.261 }, 00:18:26.261 { 00:18:26.261 "method": "sock_impl_set_options", 00:18:26.261 "params": { 00:18:26.261 "impl_name": "posix", 00:18:26.261 "recv_buf_size": 2097152, 00:18:26.261 "send_buf_size": 2097152, 00:18:26.261 "enable_recv_pipe": true, 00:18:26.261 "enable_quickack": false, 00:18:26.261 "enable_placement_id": 0, 00:18:26.261 "enable_zerocopy_send_server": true, 00:18:26.261 "enable_zerocopy_send_client": false, 00:18:26.261 "zerocopy_threshold": 0, 00:18:26.261 "tls_version": 0, 00:18:26.261 "enable_ktls": false 00:18:26.261 } 00:18:26.261 } 00:18:26.261 ] 00:18:26.261 }, 00:18:26.261 { 00:18:26.261 "subsystem": "vmd", 00:18:26.261 "config": [] 00:18:26.261 }, 00:18:26.261 { 00:18:26.261 "subsystem": "accel", 00:18:26.261 "config": [ 00:18:26.261 { 00:18:26.261 "method": "accel_set_options", 00:18:26.261 "params": { 00:18:26.261 "small_cache_size": 128, 00:18:26.261 "large_cache_size": 16, 00:18:26.261 "task_count": 2048, 00:18:26.261 "sequence_count": 2048, 00:18:26.261 "buf_count": 2048 00:18:26.261 } 00:18:26.261 } 00:18:26.261 ] 00:18:26.261 }, 00:18:26.261 { 00:18:26.261 "subsystem": "bdev", 00:18:26.261 "config": [ 00:18:26.261 { 00:18:26.261 "method": "bdev_set_options", 00:18:26.261 "params": { 00:18:26.261 "bdev_io_pool_size": 65535, 00:18:26.261 "bdev_io_cache_size": 256, 00:18:26.261 "bdev_auto_examine": true, 00:18:26.261 "iobuf_small_cache_size": 128, 00:18:26.261 "iobuf_large_cache_size": 16 00:18:26.261 } 00:18:26.261 }, 00:18:26.261 { 00:18:26.261 "method": "bdev_raid_set_options", 00:18:26.261 "params": { 00:18:26.261 "process_window_size_kb": 1024, 
00:18:26.261 "process_max_bandwidth_mb_sec": 0 00:18:26.261 } 00:18:26.261 }, 00:18:26.261 { 00:18:26.261 "method": "bdev_iscsi_set_options", 00:18:26.261 "params": { 00:18:26.261 "timeout_sec": 30 00:18:26.261 } 00:18:26.261 }, 00:18:26.261 { 00:18:26.261 "method": "bdev_nvme_set_options", 00:18:26.261 "params": { 00:18:26.261 "action_on_timeout": "none", 00:18:26.261 "timeout_us": 0, 00:18:26.261 "timeout_admin_us": 0, 00:18:26.261 "keep_alive_timeout_ms": 10000, 00:18:26.261 "arbitration_burst": 0, 00:18:26.262 "low_priority_weight": 0, 00:18:26.262 "medium_priority_weight": 0, 00:18:26.262 "high_priority_weight": 0, 00:18:26.262 "nvme_adminq_poll_period_us": 10000, 00:18:26.262 "nvme_ioq_poll_period_us": 0, 00:18:26.262 "io_queue_requests": 0, 00:18:26.262 "delay_cmd_submit": true, 00:18:26.262 "transport_retry_count": 4, 00:18:26.262 "bdev_retry_count": 3, 00:18:26.262 "transport_ack_timeout": 0, 00:18:26.262 "ctrlr_loss_timeout_sec": 0, 00:18:26.262 "reconnect_delay_sec": 0, 00:18:26.262 "fast_io_fail_timeout_sec": 0, 00:18:26.262 "disable_auto_failback": false, 00:18:26.262 "generate_uuids": false, 00:18:26.262 "transport_tos": 0, 00:18:26.262 "nvme_error_stat": false, 00:18:26.262 "rdma_srq_size": 0, 00:18:26.262 "io_path_stat": false, 00:18:26.262 "allow_accel_sequence": false, 00:18:26.262 "rdma_max_cq_size": 0, 00:18:26.262 "rdma_cm_event_timeout_ms": 0, 00:18:26.262 "dhchap_digests": [ 00:18:26.262 "sha256", 00:18:26.262 "sha384", 00:18:26.262 "sha512" 00:18:26.262 ], 00:18:26.262 "dhchap_dhgroups": [ 00:18:26.262 "null", 00:18:26.262 "ffdhe2048", 00:18:26.262 "ffdhe3072", 00:18:26.262 "ffdhe4096", 00:18:26.262 "ffdhe6144", 00:18:26.262 "ffdhe8192" 00:18:26.262 ], 00:18:26.262 "rdma_umr_per_io": false 00:18:26.262 } 00:18:26.262 }, 00:18:26.262 { 00:18:26.262 "method": "bdev_nvme_set_hotplug", 00:18:26.262 "params": { 00:18:26.262 "period_us": 100000, 00:18:26.262 "enable": false 00:18:26.262 } 00:18:26.262 }, 00:18:26.262 { 00:18:26.262 "method": "bdev_malloc_create", 00:18:26.262 "params": { 00:18:26.262 "name": "malloc0", 00:18:26.262 "num_blocks": 8192, 00:18:26.262 "block_size": 4096, 00:18:26.262 "physical_block_size": 4096, 00:18:26.262 "uuid": "06cb0ce5-aa9d-4140-a2dd-02417527afde", 00:18:26.262 "optimal_io_boundary": 0, 00:18:26.262 "md_size": 0, 00:18:26.262 "dif_type": 0, 00:18:26.262 "dif_is_head_of_md": false, 00:18:26.262 "dif_pi_format": 0 00:18:26.262 } 00:18:26.262 }, 00:18:26.262 { 00:18:26.262 "method": "bdev_wait_for_examine" 00:18:26.262 } 00:18:26.262 ] 00:18:26.262 }, 00:18:26.262 { 00:18:26.262 "subsystem": "nbd", 00:18:26.262 "config": [] 00:18:26.262 }, 00:18:26.262 { 00:18:26.262 "subsystem": "scheduler", 00:18:26.262 "config": [ 00:18:26.262 { 00:18:26.262 "method": "framework_set_scheduler", 00:18:26.262 "params": { 00:18:26.262 "name": "static" 00:18:26.262 } 00:18:26.262 } 00:18:26.262 ] 00:18:26.262 }, 00:18:26.262 { 00:18:26.262 "subsystem": "nvmf", 00:18:26.262 "config": [ 00:18:26.262 { 00:18:26.262 "method": "nvmf_set_config", 00:18:26.262 "params": { 00:18:26.262 "discovery_filter": "match_any", 00:18:26.262 "admin_cmd_passthru": { 00:18:26.262 "identify_ctrlr": false 00:18:26.262 }, 00:18:26.262 "dhchap_digests": [ 00:18:26.262 "sha256", 00:18:26.262 "sha384", 00:18:26.262 "sha512" 00:18:26.262 ], 00:18:26.262 "dhchap_dhgroups": [ 00:18:26.262 "null", 00:18:26.262 "ffdhe2048", 00:18:26.262 "ffdhe3072", 00:18:26.262 "ffdhe4096", 00:18:26.262 "ffdhe6144", 00:18:26.262 "ffdhe8192" 00:18:26.262 ] 00:18:26.262 } 00:18:26.262 }, 00:18:26.262 { 
00:18:26.262 "method": "nvmf_set_max_subsystems", 00:18:26.262 "params": { 00:18:26.262 "max_subsystems": 1024 00:18:26.262 } 00:18:26.262 }, 00:18:26.262 { 00:18:26.262 "method": "nvmf_set_crdt", 00:18:26.262 "params": { 00:18:26.262 "crdt1": 0, 00:18:26.262 "crdt2": 0, 00:18:26.262 "crdt3": 0 00:18:26.262 } 00:18:26.262 }, 00:18:26.262 { 00:18:26.262 "method": "nvmf_create_transport", 00:18:26.262 "params": { 00:18:26.262 "trtype": "TCP", 00:18:26.262 "max_queue_depth": 128, 00:18:26.262 "max_io_qpairs_per_ctrlr": 127, 00:18:26.262 "in_capsule_data_size": 4096, 00:18:26.262 "max_io_size": 131072, 00:18:26.262 "io_unit_size": 131072, 00:18:26.262 "max_aq_depth": 128, 00:18:26.262 "num_shared_buffers": 511, 00:18:26.262 "buf_cache_size": 4294967295, 00:18:26.262 "dif_insert_or_strip": false, 00:18:26.262 "zcopy": false, 00:18:26.262 "c2h_success": false, 00:18:26.262 "sock_priority": 0, 00:18:26.262 "abort_timeout_sec": 1, 00:18:26.262 "ack_timeout": 0, 00:18:26.262 "data_wr_pool_size": 0 00:18:26.262 } 00:18:26.262 }, 00:18:26.262 { 00:18:26.262 "method": "nvmf_create_subsystem", 00:18:26.262 "params": { 00:18:26.262 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:26.262 "allow_any_host": false, 00:18:26.262 "serial_number": "SPDK00000000000001", 00:18:26.262 "model_number": "SPDK bdev Controller", 00:18:26.262 "max_namespaces": 10, 00:18:26.262 "min_cntlid": 1, 00:18:26.262 "max_cntlid": 65519, 00:18:26.262 "ana_reporting": false 00:18:26.262 } 00:18:26.262 }, 00:18:26.262 { 00:18:26.262 "method": "nvmf_subsystem_add_host", 00:18:26.262 "params": { 00:18:26.262 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:26.262 "host": "nqn.2016-06.io.spdk:host1", 00:18:26.262 "psk": "key0" 00:18:26.262 } 00:18:26.262 }, 00:18:26.262 { 00:18:26.262 "method": "nvmf_subsystem_add_ns", 00:18:26.262 "params": { 00:18:26.262 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:26.262 "namespace": { 00:18:26.262 "nsid": 1, 00:18:26.262 "bdev_name": "malloc0", 00:18:26.262 "nguid": "06CB0CE5AA9D4140A2DD02417527AFDE", 00:18:26.262 "uuid": "06cb0ce5-aa9d-4140-a2dd-02417527afde", 00:18:26.262 "no_auto_visible": false 00:18:26.262 } 00:18:26.262 } 00:18:26.262 }, 00:18:26.262 { 00:18:26.262 "method": "nvmf_subsystem_add_listener", 00:18:26.262 "params": { 00:18:26.262 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:26.262 "listen_address": { 00:18:26.262 "trtype": "TCP", 00:18:26.262 "adrfam": "IPv4", 00:18:26.262 "traddr": "10.0.0.2", 00:18:26.262 "trsvcid": "4420" 00:18:26.262 }, 00:18:26.262 "secure_channel": true 00:18:26.262 } 00:18:26.262 } 00:18:26.262 ] 00:18:26.262 } 00:18:26.262 ] 00:18:26.262 }' 00:18:26.262 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:26.262 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1536483 00:18:26.262 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1536483 00:18:26.262 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:26.262 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1536483 ']' 00:18:26.262 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.262 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:26.262 10:32:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:26.262 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:26.262 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:26.262 [2024-12-12 10:32:00.139648] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:18:26.262 [2024-12-12 10:32:00.139693] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:26.262 [2024-12-12 10:32:00.218668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.262 [2024-12-12 10:32:00.257897] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:26.262 [2024-12-12 10:32:00.257931] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:26.262 [2024-12-12 10:32:00.257938] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:26.262 [2024-12-12 10:32:00.257944] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:26.262 [2024-12-12 10:32:00.257950] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:26.262 [2024-12-12 10:32:00.258466] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:26.522 [2024-12-12 10:32:00.473546] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:26.522 [2024-12-12 10:32:00.505584] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:26.522 [2024-12-12 10:32:00.505808] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:27.096 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:27.096 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:27.096 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:27.096 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:27.096 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.096 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:27.096 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1536720 00:18:27.096 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1536720 /var/tmp/bdevperf.sock 00:18:27.096 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1536720 ']' 00:18:27.096 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:27.096 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 
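The -c /dev/fd/63 argument in the bdevperf launch above is bash process substitution: tls.sh assembles the entire JSON subsystem configuration in a shell variable and hands it to bdevperf through an anonymous file descriptor, so no temporary config file is written. A minimal sketch of the same pattern, with the config trimmed down to just the keyring subsystem and /tmp/psk.txt standing in as a placeholder for the real PSK file:

    # Sketch: pass an in-memory JSON config to bdevperf via process
    # substitution; bash expands <(...) to /dev/fd/NN, matching the trace.
    BDEVPERF_CONF='{
      "subsystems": [
        {
          "subsystem": "keyring",
          "config": [
            {
              "method": "keyring_file_add_key",
              "params": { "name": "key0", "path": "/tmp/psk.txt" }
            }
          ]
        }
      ]
    }'
    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c <(echo "$BDEVPERF_CONF")

The -z flag makes bdevperf start idle and wait for RPCs, which is why the test can keep configuring it over /var/tmp/bdevperf.sock and only later kick off the run with perform_tests.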
00:18:27.096 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:27.096 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:27.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:27.096 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:18:27.096 "subsystems": [ 00:18:27.096 { 00:18:27.096 "subsystem": "keyring", 00:18:27.096 "config": [ 00:18:27.096 { 00:18:27.096 "method": "keyring_file_add_key", 00:18:27.096 "params": { 00:18:27.096 "name": "key0", 00:18:27.096 "path": "/tmp/tmp.kXyiquXCpP" 00:18:27.096 } 00:18:27.096 } 00:18:27.096 ] 00:18:27.096 }, 00:18:27.096 { 00:18:27.096 "subsystem": "iobuf", 00:18:27.096 "config": [ 00:18:27.096 { 00:18:27.096 "method": "iobuf_set_options", 00:18:27.096 "params": { 00:18:27.096 "small_pool_count": 8192, 00:18:27.096 "large_pool_count": 1024, 00:18:27.096 "small_bufsize": 8192, 00:18:27.096 "large_bufsize": 135168, 00:18:27.096 "enable_numa": false 00:18:27.096 } 00:18:27.096 } 00:18:27.096 ] 00:18:27.096 }, 00:18:27.096 { 00:18:27.096 "subsystem": "sock", 00:18:27.096 "config": [ 00:18:27.096 { 00:18:27.096 "method": "sock_set_default_impl", 00:18:27.096 "params": { 00:18:27.096 "impl_name": "posix" 00:18:27.096 } 00:18:27.096 }, 00:18:27.096 { 00:18:27.096 "method": "sock_impl_set_options", 00:18:27.096 "params": { 00:18:27.096 "impl_name": "ssl", 00:18:27.096 "recv_buf_size": 4096, 00:18:27.096 "send_buf_size": 4096, 00:18:27.096 "enable_recv_pipe": true, 00:18:27.096 "enable_quickack": false, 00:18:27.096 "enable_placement_id": 0, 00:18:27.096 "enable_zerocopy_send_server": true, 00:18:27.096 "enable_zerocopy_send_client": false, 00:18:27.096 "zerocopy_threshold": 0, 00:18:27.096 "tls_version": 0, 00:18:27.096 "enable_ktls": false 00:18:27.096 } 00:18:27.096 }, 00:18:27.096 { 00:18:27.096 "method": "sock_impl_set_options", 00:18:27.096 "params": { 00:18:27.096 "impl_name": "posix", 00:18:27.096 "recv_buf_size": 2097152, 00:18:27.096 "send_buf_size": 2097152, 00:18:27.096 "enable_recv_pipe": true, 00:18:27.096 "enable_quickack": false, 00:18:27.096 "enable_placement_id": 0, 00:18:27.096 "enable_zerocopy_send_server": true, 00:18:27.096 "enable_zerocopy_send_client": false, 00:18:27.096 "zerocopy_threshold": 0, 00:18:27.096 "tls_version": 0, 00:18:27.096 "enable_ktls": false 00:18:27.096 } 00:18:27.096 } 00:18:27.096 ] 00:18:27.096 }, 00:18:27.096 { 00:18:27.096 "subsystem": "vmd", 00:18:27.096 "config": [] 00:18:27.096 }, 00:18:27.096 { 00:18:27.096 "subsystem": "accel", 00:18:27.096 "config": [ 00:18:27.096 { 00:18:27.096 "method": "accel_set_options", 00:18:27.096 "params": { 00:18:27.096 "small_cache_size": 128, 00:18:27.096 "large_cache_size": 16, 00:18:27.096 "task_count": 2048, 00:18:27.096 "sequence_count": 2048, 00:18:27.096 "buf_count": 2048 00:18:27.096 } 00:18:27.096 } 00:18:27.096 ] 00:18:27.096 }, 00:18:27.096 { 00:18:27.096 "subsystem": "bdev", 00:18:27.096 "config": [ 00:18:27.096 { 00:18:27.096 "method": "bdev_set_options", 00:18:27.096 "params": { 00:18:27.096 "bdev_io_pool_size": 65535, 00:18:27.096 "bdev_io_cache_size": 256, 00:18:27.096 "bdev_auto_examine": true, 00:18:27.096 "iobuf_small_cache_size": 128, 00:18:27.096 "iobuf_large_cache_size": 16 00:18:27.096 } 00:18:27.096 }, 00:18:27.096 { 00:18:27.096 "method": "bdev_raid_set_options", 00:18:27.096 
"params": { 00:18:27.096 "process_window_size_kb": 1024, 00:18:27.096 "process_max_bandwidth_mb_sec": 0 00:18:27.096 } 00:18:27.096 }, 00:18:27.096 { 00:18:27.096 "method": "bdev_iscsi_set_options", 00:18:27.096 "params": { 00:18:27.096 "timeout_sec": 30 00:18:27.096 } 00:18:27.096 }, 00:18:27.096 { 00:18:27.096 "method": "bdev_nvme_set_options", 00:18:27.096 "params": { 00:18:27.096 "action_on_timeout": "none", 00:18:27.096 "timeout_us": 0, 00:18:27.096 "timeout_admin_us": 0, 00:18:27.096 "keep_alive_timeout_ms": 10000, 00:18:27.096 "arbitration_burst": 0, 00:18:27.096 "low_priority_weight": 0, 00:18:27.096 "medium_priority_weight": 0, 00:18:27.096 "high_priority_weight": 0, 00:18:27.096 "nvme_adminq_poll_period_us": 10000, 00:18:27.096 "nvme_ioq_poll_period_us": 0, 00:18:27.096 "io_queue_requests": 512, 00:18:27.096 "delay_cmd_submit": true, 00:18:27.096 "transport_retry_count": 4, 00:18:27.096 "bdev_retry_count": 3, 00:18:27.096 "transport_ack_timeout": 0, 00:18:27.096 "ctrlr_loss_timeout_sec": 0, 00:18:27.096 "reconnect_delay_sec": 0, 00:18:27.096 "fast_io_fail_timeout_sec": 0, 00:18:27.096 "disable_auto_failback": false, 00:18:27.096 "generate_uuids": false, 00:18:27.096 "transport_tos": 0, 00:18:27.096 "nvme_error_stat": false, 00:18:27.096 "rdma_srq_size": 0, 00:18:27.096 "io_path_stat": false, 00:18:27.096 "allow_accel_sequence": false, 00:18:27.096 "rdma_max_cq_size": 0, 00:18:27.096 "rdma_cm_event_timeout_ms": 0, 00:18:27.096 "dhchap_digests": [ 00:18:27.096 "sha256", 00:18:27.096 "sha384", 00:18:27.096 "sha512" 00:18:27.096 ], 00:18:27.096 "dhchap_dhgroups": [ 00:18:27.096 "null", 00:18:27.096 "ffdhe2048", 00:18:27.096 "ffdhe3072", 00:18:27.096 "ffdhe4096", 00:18:27.096 "ffdhe6144", 00:18:27.096 "ffdhe8192" 00:18:27.096 ], 00:18:27.096 "rdma_umr_per_io": false 00:18:27.096 } 00:18:27.096 }, 00:18:27.096 { 00:18:27.096 "method": "bdev_nvme_attach_controller", 00:18:27.096 "params": { 00:18:27.096 "name": "TLSTEST", 00:18:27.096 "trtype": "TCP", 00:18:27.096 "adrfam": "IPv4", 00:18:27.096 "traddr": "10.0.0.2", 00:18:27.097 "trsvcid": "4420", 00:18:27.097 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.097 "prchk_reftag": false, 00:18:27.097 "prchk_guard": false, 00:18:27.097 "ctrlr_loss_timeout_sec": 0, 00:18:27.097 "reconnect_delay_sec": 0, 00:18:27.097 "fast_io_fail_timeout_sec": 0, 00:18:27.097 "psk": "key0", 00:18:27.097 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:27.097 "hdgst": false, 00:18:27.097 "ddgst": false, 00:18:27.097 "multipath": "multipath" 00:18:27.097 } 00:18:27.097 }, 00:18:27.097 { 00:18:27.097 "method": "bdev_nvme_set_hotplug", 00:18:27.097 "params": { 00:18:27.097 "period_us": 100000, 00:18:27.097 "enable": false 00:18:27.097 } 00:18:27.097 }, 00:18:27.097 { 00:18:27.097 "method": "bdev_wait_for_examine" 00:18:27.097 } 00:18:27.097 ] 00:18:27.097 }, 00:18:27.097 { 00:18:27.097 "subsystem": "nbd", 00:18:27.097 "config": [] 00:18:27.097 } 00:18:27.097 ] 00:18:27.097 }' 00:18:27.097 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:27.097 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.097 [2024-12-12 10:32:01.060800] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:18:27.097 [2024-12-12 10:32:01.060847] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1536720 ] 00:18:27.356 [2024-12-12 10:32:01.130942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.356 [2024-12-12 10:32:01.170303] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:27.356 [2024-12-12 10:32:01.324176] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:27.923 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:27.923 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:27.923 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:28.181 Running I/O for 10 seconds... 00:18:30.168 4932.00 IOPS, 19.27 MiB/s [2024-12-12T09:32:05.127Z] 5300.00 IOPS, 20.70 MiB/s [2024-12-12T09:32:06.062Z] 5370.33 IOPS, 20.98 MiB/s [2024-12-12T09:32:07.438Z] 5433.00 IOPS, 21.22 MiB/s [2024-12-12T09:32:08.005Z] 5458.20 IOPS, 21.32 MiB/s [2024-12-12T09:32:09.382Z] 5392.83 IOPS, 21.07 MiB/s [2024-12-12T09:32:10.319Z] 5416.43 IOPS, 21.16 MiB/s [2024-12-12T09:32:11.255Z] 5431.00 IOPS, 21.21 MiB/s [2024-12-12T09:32:12.192Z] 5442.78 IOPS, 21.26 MiB/s [2024-12-12T09:32:12.192Z] 5456.40 IOPS, 21.31 MiB/s 00:18:38.169 Latency(us) 00:18:38.169 [2024-12-12T09:32:12.192Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.169 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:38.169 Verification LBA range: start 0x0 length 0x2000 00:18:38.169 TLSTESTn1 : 10.01 5462.33 21.34 0.00 0.00 23399.10 5086.84 66409.81 00:18:38.169 [2024-12-12T09:32:12.192Z] =================================================================================================================== 00:18:38.169 [2024-12-12T09:32:12.192Z] Total : 5462.33 21.34 0.00 0.00 23399.10 5086.84 66409.81 00:18:38.169 { 00:18:38.169 "results": [ 00:18:38.169 { 00:18:38.169 "job": "TLSTESTn1", 00:18:38.169 "core_mask": "0x4", 00:18:38.170 "workload": "verify", 00:18:38.170 "status": "finished", 00:18:38.170 "verify_range": { 00:18:38.170 "start": 0, 00:18:38.170 "length": 8192 00:18:38.170 }, 00:18:38.170 "queue_depth": 128, 00:18:38.170 "io_size": 4096, 00:18:38.170 "runtime": 10.01239, 00:18:38.170 "iops": 5462.332170440824, 00:18:38.170 "mibps": 21.337235040784467, 00:18:38.170 "io_failed": 0, 00:18:38.170 "io_timeout": 0, 00:18:38.170 "avg_latency_us": 23399.096268751455, 00:18:38.170 "min_latency_us": 5086.8419047619045, 00:18:38.170 "max_latency_us": 66409.81333333334 00:18:38.170 } 00:18:38.170 ], 00:18:38.170 "core_count": 1 00:18:38.170 } 00:18:38.170 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:38.170 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1536720 00:18:38.170 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1536720 ']' 00:18:38.170 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1536720 00:18:38.170 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:18:38.170 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:38.170 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1536720 00:18:38.170 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:38.170 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:38.170 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1536720' 00:18:38.170 killing process with pid 1536720 00:18:38.170 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1536720 00:18:38.170 Received shutdown signal, test time was about 10.000000 seconds 00:18:38.170 00:18:38.170 Latency(us) 00:18:38.170 [2024-12-12T09:32:12.193Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.170 [2024-12-12T09:32:12.193Z] =================================================================================================================== 00:18:38.170 [2024-12-12T09:32:12.193Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:38.170 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1536720 00:18:38.429 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1536483 00:18:38.429 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1536483 ']' 00:18:38.429 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1536483 00:18:38.429 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:38.429 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:38.429 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1536483 00:18:38.429 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:38.429 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:38.429 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1536483' 00:18:38.429 killing process with pid 1536483 00:18:38.429 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1536483 00:18:38.429 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1536483 00:18:38.688 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:18:38.688 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:38.688 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:38.688 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:38.688 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1538525 00:18:38.688 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:38.688 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1538525 
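waitforlisten blocks here until the freshly started nvmf_tgt answers on its UNIX-domain RPC socket, so none of the configuration RPCs that follow can race the target's startup. The real helper lives in test/common/autotest_common.sh; what follows is only a simplified sketch of the idea, with the socket path and retry budget chosen for illustration:

    # Simplified stand-in for waitforlisten: poll the RPC socket until the
    # target responds rather than sleeping for a fixed time.
    wait_for_rpc() {
        local sock=${1:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            # rpc_get_methods only succeeds once the app's RPC server is up
            if scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }

    wait_for_rpc /var/tmp/spdk.sock || echo 'nvmf_tgt never came up' >&2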
00:18:38.688 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1538525 ']' 00:18:38.688 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.688 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:38.688 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:38.688 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:38.688 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:38.688 [2024-12-12 10:32:12.531865] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:18:38.688 [2024-12-12 10:32:12.531913] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:38.688 [2024-12-12 10:32:12.610337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.688 [2024-12-12 10:32:12.647147] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:38.688 [2024-12-12 10:32:12.647182] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:38.688 [2024-12-12 10:32:12.647189] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:38.688 [2024-12-12 10:32:12.647195] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:38.688 [2024-12-12 10:32:12.647200] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
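With the app started, the setup_nvmf_tgt helper traced below builds the TLS-enabled target out of a handful of RPCs. Collected in one place for readability (same arguments as in the trace, xtrace prefixes and timestamps dropped; scripts/rpc.py is assumed to be run from the spdk checkout):

    # setup_nvmf_tgt, as traced below:
    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k           # -k: TLS listener
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc keyring_file_add_key key0 /tmp/tmp.kXyiquXCpP
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk key0    # bind host1 to the PSK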
00:18:38.688 [2024-12-12 10:32:12.647705] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.947 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:38.947 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:38.947 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:38.947 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:38.947 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:38.947 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:38.947 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.kXyiquXCpP 00:18:38.947 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.kXyiquXCpP 00:18:38.947 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:38.947 [2024-12-12 10:32:12.959674] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:39.206 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:39.206 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:39.465 [2024-12-12 10:32:13.336648] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:39.465 [2024-12-12 10:32:13.336864] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:39.465 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:39.724 malloc0 00:18:39.724 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:39.724 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.kXyiquXCpP 00:18:39.983 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:40.242 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1538772 00:18:40.242 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:40.242 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:40.242 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1538772 /var/tmp/bdevperf.sock 00:18:40.242 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 1538772 ']' 00:18:40.242 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:40.242 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:40.242 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:40.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:40.242 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:40.242 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:40.242 [2024-12-12 10:32:14.136119] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:18:40.242 [2024-12-12 10:32:14.136166] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1538772 ] 00:18:40.242 [2024-12-12 10:32:14.212097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.242 [2024-12-12 10:32:14.252212] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:41.178 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:41.178 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:41.178 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kXyiquXCpP 00:18:41.178 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:41.436 [2024-12-12 10:32:15.317635] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:41.437 nvme0n1 00:18:41.437 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:41.694 Running I/O for 1 seconds... 
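The initiator side of this run (traced above at tls.sh@229, @230 and @234) boils down to three commands against the bdevperf RPC socket, collected here as a sketch; the one-second run's results follow:

    # Initiator side of the TLS run (mirrors the @229/@230/@234 trace lines):
    sock=/var/tmp/bdevperf.sock
    scripts/rpc.py -s $sock keyring_file_add_key key0 /tmp/tmp.kXyiquXCpP
    scripts/rpc.py -s $sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests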
00:18:42.629 5559.00 IOPS, 21.71 MiB/s 00:18:42.629 Latency(us) 00:18:42.629 [2024-12-12T09:32:16.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.629 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:42.629 Verification LBA range: start 0x0 length 0x2000 00:18:42.629 nvme0n1 : 1.01 5609.73 21.91 0.00 0.00 22666.19 5274.09 21221.18 00:18:42.629 [2024-12-12T09:32:16.652Z] =================================================================================================================== 00:18:42.629 [2024-12-12T09:32:16.652Z] Total : 5609.73 21.91 0.00 0.00 22666.19 5274.09 21221.18 00:18:42.629 { 00:18:42.629 "results": [ 00:18:42.629 { 00:18:42.629 "job": "nvme0n1", 00:18:42.629 "core_mask": "0x2", 00:18:42.629 "workload": "verify", 00:18:42.629 "status": "finished", 00:18:42.629 "verify_range": { 00:18:42.629 "start": 0, 00:18:42.629 "length": 8192 00:18:42.629 }, 00:18:42.629 "queue_depth": 128, 00:18:42.629 "io_size": 4096, 00:18:42.629 "runtime": 1.013774, 00:18:42.629 "iops": 5609.731557526628, 00:18:42.629 "mibps": 21.913013896588392, 00:18:42.629 "io_failed": 0, 00:18:42.629 "io_timeout": 0, 00:18:42.629 "avg_latency_us": 22666.19468495399, 00:18:42.629 "min_latency_us": 5274.087619047619, 00:18:42.629 "max_latency_us": 21221.180952380953 00:18:42.629 } 00:18:42.629 ], 00:18:42.629 "core_count": 1 00:18:42.629 } 00:18:42.629 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1538772 00:18:42.629 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1538772 ']' 00:18:42.629 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1538772 00:18:42.629 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:42.629 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:42.629 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1538772 00:18:42.629 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:42.629 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:42.629 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1538772' 00:18:42.629 killing process with pid 1538772 00:18:42.630 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1538772 00:18:42.630 Received shutdown signal, test time was about 1.000000 seconds 00:18:42.630 00:18:42.630 Latency(us) 00:18:42.630 [2024-12-12T09:32:16.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.630 [2024-12-12T09:32:16.653Z] =================================================================================================================== 00:18:42.630 [2024-12-12T09:32:16.653Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:42.630 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1538772 00:18:42.889 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1538525 00:18:42.889 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1538525 ']' 00:18:42.889 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1538525 00:18:42.889 10:32:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:42.889 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:42.889 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1538525 00:18:42.889 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:42.889 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:42.889 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1538525' 00:18:42.889 killing process with pid 1538525 00:18:42.889 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1538525 00:18:42.889 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1538525 00:18:43.148 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:18:43.148 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:43.148 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:43.148 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:43.148 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1539235 00:18:43.148 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:43.148 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1539235 00:18:43.148 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1539235 ']' 00:18:43.148 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.148 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:43.148 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:43.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:43.148 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:43.148 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:43.148 [2024-12-12 10:32:17.018740] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:18:43.148 [2024-12-12 10:32:17.018784] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:43.148 [2024-12-12 10:32:17.094908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.148 [2024-12-12 10:32:17.134771] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:43.148 [2024-12-12 10:32:17.134805] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:43.148 [2024-12-12 10:32:17.134813] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:43.148 [2024-12-12 10:32:17.134819] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:43.148 [2024-12-12 10:32:17.134824] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:43.148 [2024-12-12 10:32:17.135350] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.407 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:43.407 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:43.407 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:43.407 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:43.407 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:43.407 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:43.407 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:18:43.407 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.407 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:43.407 [2024-12-12 10:32:17.272465] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:43.407 malloc0 00:18:43.407 [2024-12-12 10:32:17.300586] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:43.407 [2024-12-12 10:32:17.300804] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:43.407 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.407 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1539339 00:18:43.407 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1539339 /var/tmp/bdevperf.sock 00:18:43.407 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:43.407 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1539339 ']' 00:18:43.407 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:43.407 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:43.407 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:43.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:43.407 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:43.407 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:43.407 [2024-12-12 10:32:17.375602] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:18:43.407 [2024-12-12 10:32:17.375648] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1539339 ] 00:18:43.666 [2024-12-12 10:32:17.450656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.666 [2024-12-12 10:32:17.491519] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:43.666 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:43.666 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:43.666 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kXyiquXCpP 00:18:43.925 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:44.184 [2024-12-12 10:32:17.952674] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:44.184 nvme0n1 00:18:44.184 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:44.184 Running I/O for 1 seconds... 00:18:45.379 4816.00 IOPS, 18.81 MiB/s 00:18:45.379 Latency(us) 00:18:45.379 [2024-12-12T09:32:19.402Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.379 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:45.379 Verification LBA range: start 0x0 length 0x2000 00:18:45.379 nvme0n1 : 1.02 4862.28 18.99 0.00 0.00 26128.78 5960.66 41443.72 00:18:45.379 [2024-12-12T09:32:19.402Z] =================================================================================================================== 00:18:45.379 [2024-12-12T09:32:19.402Z] Total : 4862.28 18.99 0.00 0.00 26128.78 5960.66 41443.72 00:18:45.379 { 00:18:45.379 "results": [ 00:18:45.379 { 00:18:45.379 "job": "nvme0n1", 00:18:45.379 "core_mask": "0x2", 00:18:45.379 "workload": "verify", 00:18:45.379 "status": "finished", 00:18:45.379 "verify_range": { 00:18:45.379 "start": 0, 00:18:45.379 "length": 8192 00:18:45.379 }, 00:18:45.379 "queue_depth": 128, 00:18:45.380 "io_size": 4096, 00:18:45.380 "runtime": 1.016806, 00:18:45.380 "iops": 4862.28444757407, 00:18:45.380 "mibps": 18.993298623336212, 00:18:45.380 "io_failed": 0, 00:18:45.380 "io_timeout": 0, 00:18:45.380 "avg_latency_us": 26128.777882570503, 00:18:45.380 "min_latency_us": 5960.655238095238, 00:18:45.380 "max_latency_us": 41443.718095238095 00:18:45.380 } 00:18:45.380 ], 00:18:45.380 "core_count": 1 00:18:45.380 } 00:18:45.380 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:18:45.380 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.380 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:45.380 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.380 10:32:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:18:45.380 "subsystems": [ 00:18:45.380 { 00:18:45.380 "subsystem": "keyring", 00:18:45.380 "config": [ 00:18:45.380 { 00:18:45.380 "method": "keyring_file_add_key", 00:18:45.380 "params": { 00:18:45.380 "name": "key0", 00:18:45.380 "path": "/tmp/tmp.kXyiquXCpP" 00:18:45.380 } 00:18:45.380 } 00:18:45.380 ] 00:18:45.380 }, 00:18:45.380 { 00:18:45.380 "subsystem": "iobuf", 00:18:45.380 "config": [ 00:18:45.380 { 00:18:45.380 "method": "iobuf_set_options", 00:18:45.380 "params": { 00:18:45.380 "small_pool_count": 8192, 00:18:45.380 "large_pool_count": 1024, 00:18:45.380 "small_bufsize": 8192, 00:18:45.380 "large_bufsize": 135168, 00:18:45.380 "enable_numa": false 00:18:45.380 } 00:18:45.380 } 00:18:45.380 ] 00:18:45.380 }, 00:18:45.380 { 00:18:45.380 "subsystem": "sock", 00:18:45.380 "config": [ 00:18:45.380 { 00:18:45.380 "method": "sock_set_default_impl", 00:18:45.380 "params": { 00:18:45.380 "impl_name": "posix" 00:18:45.380 } 00:18:45.380 }, 00:18:45.380 { 00:18:45.380 "method": "sock_impl_set_options", 00:18:45.380 "params": { 00:18:45.380 "impl_name": "ssl", 00:18:45.380 "recv_buf_size": 4096, 00:18:45.380 "send_buf_size": 4096, 00:18:45.380 "enable_recv_pipe": true, 00:18:45.380 "enable_quickack": false, 00:18:45.380 "enable_placement_id": 0, 00:18:45.380 "enable_zerocopy_send_server": true, 00:18:45.380 "enable_zerocopy_send_client": false, 00:18:45.380 "zerocopy_threshold": 0, 00:18:45.380 "tls_version": 0, 00:18:45.380 "enable_ktls": false 00:18:45.380 } 00:18:45.380 }, 00:18:45.380 { 00:18:45.380 "method": "sock_impl_set_options", 00:18:45.380 "params": { 00:18:45.380 "impl_name": "posix", 00:18:45.380 "recv_buf_size": 2097152, 00:18:45.380 "send_buf_size": 2097152, 00:18:45.380 "enable_recv_pipe": true, 00:18:45.380 "enable_quickack": false, 00:18:45.380 "enable_placement_id": 0, 00:18:45.380 "enable_zerocopy_send_server": true, 00:18:45.380 "enable_zerocopy_send_client": false, 00:18:45.380 "zerocopy_threshold": 0, 00:18:45.380 "tls_version": 0, 00:18:45.380 "enable_ktls": false 00:18:45.380 } 00:18:45.380 } 00:18:45.380 ] 00:18:45.380 }, 00:18:45.380 { 00:18:45.380 "subsystem": "vmd", 00:18:45.380 "config": [] 00:18:45.380 }, 00:18:45.380 { 00:18:45.380 "subsystem": "accel", 00:18:45.380 "config": [ 00:18:45.380 { 00:18:45.380 "method": "accel_set_options", 00:18:45.380 "params": { 00:18:45.380 "small_cache_size": 128, 00:18:45.380 "large_cache_size": 16, 00:18:45.380 "task_count": 2048, 00:18:45.380 "sequence_count": 2048, 00:18:45.380 "buf_count": 2048 00:18:45.380 } 00:18:45.380 } 00:18:45.380 ] 00:18:45.380 }, 00:18:45.380 { 00:18:45.380 "subsystem": "bdev", 00:18:45.380 "config": [ 00:18:45.380 { 00:18:45.380 "method": "bdev_set_options", 00:18:45.380 "params": { 00:18:45.380 "bdev_io_pool_size": 65535, 00:18:45.380 "bdev_io_cache_size": 256, 00:18:45.380 "bdev_auto_examine": true, 00:18:45.380 "iobuf_small_cache_size": 128, 00:18:45.380 "iobuf_large_cache_size": 16 00:18:45.380 } 00:18:45.380 }, 00:18:45.380 { 00:18:45.380 "method": "bdev_raid_set_options", 00:18:45.380 "params": { 00:18:45.380 "process_window_size_kb": 1024, 00:18:45.380 "process_max_bandwidth_mb_sec": 0 00:18:45.380 } 00:18:45.380 }, 00:18:45.380 { 00:18:45.380 "method": "bdev_iscsi_set_options", 00:18:45.380 "params": { 00:18:45.380 "timeout_sec": 30 00:18:45.380 } 00:18:45.380 }, 00:18:45.380 { 00:18:45.380 "method": "bdev_nvme_set_options", 00:18:45.380 "params": { 00:18:45.380 "action_on_timeout": "none", 00:18:45.380 
"timeout_us": 0, 00:18:45.380 "timeout_admin_us": 0, 00:18:45.380 "keep_alive_timeout_ms": 10000, 00:18:45.380 "arbitration_burst": 0, 00:18:45.380 "low_priority_weight": 0, 00:18:45.380 "medium_priority_weight": 0, 00:18:45.380 "high_priority_weight": 0, 00:18:45.380 "nvme_adminq_poll_period_us": 10000, 00:18:45.380 "nvme_ioq_poll_period_us": 0, 00:18:45.380 "io_queue_requests": 0, 00:18:45.380 "delay_cmd_submit": true, 00:18:45.380 "transport_retry_count": 4, 00:18:45.380 "bdev_retry_count": 3, 00:18:45.380 "transport_ack_timeout": 0, 00:18:45.380 "ctrlr_loss_timeout_sec": 0, 00:18:45.380 "reconnect_delay_sec": 0, 00:18:45.380 "fast_io_fail_timeout_sec": 0, 00:18:45.380 "disable_auto_failback": false, 00:18:45.380 "generate_uuids": false, 00:18:45.380 "transport_tos": 0, 00:18:45.380 "nvme_error_stat": false, 00:18:45.380 "rdma_srq_size": 0, 00:18:45.380 "io_path_stat": false, 00:18:45.380 "allow_accel_sequence": false, 00:18:45.380 "rdma_max_cq_size": 0, 00:18:45.380 "rdma_cm_event_timeout_ms": 0, 00:18:45.380 "dhchap_digests": [ 00:18:45.380 "sha256", 00:18:45.380 "sha384", 00:18:45.380 "sha512" 00:18:45.380 ], 00:18:45.380 "dhchap_dhgroups": [ 00:18:45.380 "null", 00:18:45.380 "ffdhe2048", 00:18:45.380 "ffdhe3072", 00:18:45.380 "ffdhe4096", 00:18:45.380 "ffdhe6144", 00:18:45.380 "ffdhe8192" 00:18:45.380 ], 00:18:45.380 "rdma_umr_per_io": false 00:18:45.380 } 00:18:45.380 }, 00:18:45.380 { 00:18:45.380 "method": "bdev_nvme_set_hotplug", 00:18:45.380 "params": { 00:18:45.380 "period_us": 100000, 00:18:45.380 "enable": false 00:18:45.380 } 00:18:45.380 }, 00:18:45.380 { 00:18:45.380 "method": "bdev_malloc_create", 00:18:45.380 "params": { 00:18:45.380 "name": "malloc0", 00:18:45.380 "num_blocks": 8192, 00:18:45.380 "block_size": 4096, 00:18:45.380 "physical_block_size": 4096, 00:18:45.380 "uuid": "79e8362b-5beb-4027-9305-9f532c966480", 00:18:45.380 "optimal_io_boundary": 0, 00:18:45.380 "md_size": 0, 00:18:45.380 "dif_type": 0, 00:18:45.380 "dif_is_head_of_md": false, 00:18:45.380 "dif_pi_format": 0 00:18:45.380 } 00:18:45.380 }, 00:18:45.380 { 00:18:45.380 "method": "bdev_wait_for_examine" 00:18:45.380 } 00:18:45.380 ] 00:18:45.380 }, 00:18:45.380 { 00:18:45.380 "subsystem": "nbd", 00:18:45.380 "config": [] 00:18:45.380 }, 00:18:45.380 { 00:18:45.380 "subsystem": "scheduler", 00:18:45.380 "config": [ 00:18:45.380 { 00:18:45.380 "method": "framework_set_scheduler", 00:18:45.380 "params": { 00:18:45.380 "name": "static" 00:18:45.380 } 00:18:45.380 } 00:18:45.380 ] 00:18:45.380 }, 00:18:45.380 { 00:18:45.380 "subsystem": "nvmf", 00:18:45.380 "config": [ 00:18:45.380 { 00:18:45.380 "method": "nvmf_set_config", 00:18:45.380 "params": { 00:18:45.380 "discovery_filter": "match_any", 00:18:45.380 "admin_cmd_passthru": { 00:18:45.380 "identify_ctrlr": false 00:18:45.380 }, 00:18:45.380 "dhchap_digests": [ 00:18:45.380 "sha256", 00:18:45.380 "sha384", 00:18:45.380 "sha512" 00:18:45.380 ], 00:18:45.380 "dhchap_dhgroups": [ 00:18:45.380 "null", 00:18:45.380 "ffdhe2048", 00:18:45.380 "ffdhe3072", 00:18:45.380 "ffdhe4096", 00:18:45.380 "ffdhe6144", 00:18:45.380 "ffdhe8192" 00:18:45.380 ] 00:18:45.380 } 00:18:45.380 }, 00:18:45.380 { 00:18:45.380 "method": "nvmf_set_max_subsystems", 00:18:45.380 "params": { 00:18:45.380 "max_subsystems": 1024 00:18:45.380 } 00:18:45.380 }, 00:18:45.380 { 00:18:45.380 "method": "nvmf_set_crdt", 00:18:45.380 "params": { 00:18:45.380 "crdt1": 0, 00:18:45.380 "crdt2": 0, 00:18:45.380 "crdt3": 0 00:18:45.380 } 00:18:45.380 }, 00:18:45.380 { 00:18:45.380 "method": 
"nvmf_create_transport", 00:18:45.380 "params": { 00:18:45.380 "trtype": "TCP", 00:18:45.380 "max_queue_depth": 128, 00:18:45.380 "max_io_qpairs_per_ctrlr": 127, 00:18:45.380 "in_capsule_data_size": 4096, 00:18:45.380 "max_io_size": 131072, 00:18:45.380 "io_unit_size": 131072, 00:18:45.380 "max_aq_depth": 128, 00:18:45.380 "num_shared_buffers": 511, 00:18:45.380 "buf_cache_size": 4294967295, 00:18:45.381 "dif_insert_or_strip": false, 00:18:45.381 "zcopy": false, 00:18:45.381 "c2h_success": false, 00:18:45.381 "sock_priority": 0, 00:18:45.381 "abort_timeout_sec": 1, 00:18:45.381 "ack_timeout": 0, 00:18:45.381 "data_wr_pool_size": 0 00:18:45.381 } 00:18:45.381 }, 00:18:45.381 { 00:18:45.381 "method": "nvmf_create_subsystem", 00:18:45.381 "params": { 00:18:45.381 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:45.381 "allow_any_host": false, 00:18:45.381 "serial_number": "00000000000000000000", 00:18:45.381 "model_number": "SPDK bdev Controller", 00:18:45.381 "max_namespaces": 32, 00:18:45.381 "min_cntlid": 1, 00:18:45.381 "max_cntlid": 65519, 00:18:45.381 "ana_reporting": false 00:18:45.381 } 00:18:45.381 }, 00:18:45.381 { 00:18:45.381 "method": "nvmf_subsystem_add_host", 00:18:45.381 "params": { 00:18:45.381 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:45.381 "host": "nqn.2016-06.io.spdk:host1", 00:18:45.381 "psk": "key0" 00:18:45.381 } 00:18:45.381 }, 00:18:45.381 { 00:18:45.381 "method": "nvmf_subsystem_add_ns", 00:18:45.381 "params": { 00:18:45.381 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:45.381 "namespace": { 00:18:45.381 "nsid": 1, 00:18:45.381 "bdev_name": "malloc0", 00:18:45.381 "nguid": "79E8362B5BEB402793059F532C966480", 00:18:45.381 "uuid": "79e8362b-5beb-4027-9305-9f532c966480", 00:18:45.381 "no_auto_visible": false 00:18:45.381 } 00:18:45.381 } 00:18:45.381 }, 00:18:45.381 { 00:18:45.381 "method": "nvmf_subsystem_add_listener", 00:18:45.381 "params": { 00:18:45.381 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:45.381 "listen_address": { 00:18:45.381 "trtype": "TCP", 00:18:45.381 "adrfam": "IPv4", 00:18:45.381 "traddr": "10.0.0.2", 00:18:45.381 "trsvcid": "4420" 00:18:45.381 }, 00:18:45.381 "secure_channel": false, 00:18:45.381 "sock_impl": "ssl" 00:18:45.381 } 00:18:45.381 } 00:18:45.381 ] 00:18:45.381 } 00:18:45.381 ] 00:18:45.381 }' 00:18:45.381 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:45.641 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:18:45.641 "subsystems": [ 00:18:45.641 { 00:18:45.641 "subsystem": "keyring", 00:18:45.641 "config": [ 00:18:45.641 { 00:18:45.641 "method": "keyring_file_add_key", 00:18:45.641 "params": { 00:18:45.641 "name": "key0", 00:18:45.641 "path": "/tmp/tmp.kXyiquXCpP" 00:18:45.641 } 00:18:45.641 } 00:18:45.641 ] 00:18:45.641 }, 00:18:45.641 { 00:18:45.641 "subsystem": "iobuf", 00:18:45.641 "config": [ 00:18:45.641 { 00:18:45.641 "method": "iobuf_set_options", 00:18:45.641 "params": { 00:18:45.641 "small_pool_count": 8192, 00:18:45.641 "large_pool_count": 1024, 00:18:45.641 "small_bufsize": 8192, 00:18:45.641 "large_bufsize": 135168, 00:18:45.641 "enable_numa": false 00:18:45.641 } 00:18:45.641 } 00:18:45.641 ] 00:18:45.641 }, 00:18:45.641 { 00:18:45.641 "subsystem": "sock", 00:18:45.641 "config": [ 00:18:45.641 { 00:18:45.641 "method": "sock_set_default_impl", 00:18:45.641 "params": { 00:18:45.641 "impl_name": "posix" 00:18:45.641 } 00:18:45.641 }, 00:18:45.641 { 00:18:45.641 
"method": "sock_impl_set_options", 00:18:45.641 "params": { 00:18:45.641 "impl_name": "ssl", 00:18:45.641 "recv_buf_size": 4096, 00:18:45.641 "send_buf_size": 4096, 00:18:45.641 "enable_recv_pipe": true, 00:18:45.641 "enable_quickack": false, 00:18:45.641 "enable_placement_id": 0, 00:18:45.641 "enable_zerocopy_send_server": true, 00:18:45.641 "enable_zerocopy_send_client": false, 00:18:45.641 "zerocopy_threshold": 0, 00:18:45.641 "tls_version": 0, 00:18:45.641 "enable_ktls": false 00:18:45.641 } 00:18:45.641 }, 00:18:45.641 { 00:18:45.641 "method": "sock_impl_set_options", 00:18:45.641 "params": { 00:18:45.641 "impl_name": "posix", 00:18:45.641 "recv_buf_size": 2097152, 00:18:45.641 "send_buf_size": 2097152, 00:18:45.641 "enable_recv_pipe": true, 00:18:45.641 "enable_quickack": false, 00:18:45.641 "enable_placement_id": 0, 00:18:45.641 "enable_zerocopy_send_server": true, 00:18:45.641 "enable_zerocopy_send_client": false, 00:18:45.641 "zerocopy_threshold": 0, 00:18:45.641 "tls_version": 0, 00:18:45.641 "enable_ktls": false 00:18:45.641 } 00:18:45.641 } 00:18:45.641 ] 00:18:45.641 }, 00:18:45.641 { 00:18:45.641 "subsystem": "vmd", 00:18:45.641 "config": [] 00:18:45.641 }, 00:18:45.641 { 00:18:45.641 "subsystem": "accel", 00:18:45.641 "config": [ 00:18:45.641 { 00:18:45.641 "method": "accel_set_options", 00:18:45.641 "params": { 00:18:45.641 "small_cache_size": 128, 00:18:45.641 "large_cache_size": 16, 00:18:45.641 "task_count": 2048, 00:18:45.641 "sequence_count": 2048, 00:18:45.641 "buf_count": 2048 00:18:45.641 } 00:18:45.641 } 00:18:45.641 ] 00:18:45.641 }, 00:18:45.641 { 00:18:45.641 "subsystem": "bdev", 00:18:45.641 "config": [ 00:18:45.641 { 00:18:45.641 "method": "bdev_set_options", 00:18:45.641 "params": { 00:18:45.641 "bdev_io_pool_size": 65535, 00:18:45.641 "bdev_io_cache_size": 256, 00:18:45.641 "bdev_auto_examine": true, 00:18:45.641 "iobuf_small_cache_size": 128, 00:18:45.641 "iobuf_large_cache_size": 16 00:18:45.641 } 00:18:45.641 }, 00:18:45.641 { 00:18:45.641 "method": "bdev_raid_set_options", 00:18:45.641 "params": { 00:18:45.641 "process_window_size_kb": 1024, 00:18:45.641 "process_max_bandwidth_mb_sec": 0 00:18:45.641 } 00:18:45.641 }, 00:18:45.641 { 00:18:45.641 "method": "bdev_iscsi_set_options", 00:18:45.641 "params": { 00:18:45.641 "timeout_sec": 30 00:18:45.641 } 00:18:45.641 }, 00:18:45.641 { 00:18:45.641 "method": "bdev_nvme_set_options", 00:18:45.641 "params": { 00:18:45.641 "action_on_timeout": "none", 00:18:45.641 "timeout_us": 0, 00:18:45.641 "timeout_admin_us": 0, 00:18:45.641 "keep_alive_timeout_ms": 10000, 00:18:45.641 "arbitration_burst": 0, 00:18:45.641 "low_priority_weight": 0, 00:18:45.641 "medium_priority_weight": 0, 00:18:45.641 "high_priority_weight": 0, 00:18:45.641 "nvme_adminq_poll_period_us": 10000, 00:18:45.641 "nvme_ioq_poll_period_us": 0, 00:18:45.641 "io_queue_requests": 512, 00:18:45.641 "delay_cmd_submit": true, 00:18:45.641 "transport_retry_count": 4, 00:18:45.641 "bdev_retry_count": 3, 00:18:45.641 "transport_ack_timeout": 0, 00:18:45.641 "ctrlr_loss_timeout_sec": 0, 00:18:45.641 "reconnect_delay_sec": 0, 00:18:45.641 "fast_io_fail_timeout_sec": 0, 00:18:45.641 "disable_auto_failback": false, 00:18:45.641 "generate_uuids": false, 00:18:45.641 "transport_tos": 0, 00:18:45.641 "nvme_error_stat": false, 00:18:45.641 "rdma_srq_size": 0, 00:18:45.641 "io_path_stat": false, 00:18:45.641 "allow_accel_sequence": false, 00:18:45.641 "rdma_max_cq_size": 0, 00:18:45.641 "rdma_cm_event_timeout_ms": 0, 00:18:45.641 "dhchap_digests": [ 00:18:45.641 
"sha256", 00:18:45.641 "sha384", 00:18:45.641 "sha512" 00:18:45.641 ], 00:18:45.641 "dhchap_dhgroups": [ 00:18:45.641 "null", 00:18:45.641 "ffdhe2048", 00:18:45.641 "ffdhe3072", 00:18:45.641 "ffdhe4096", 00:18:45.641 "ffdhe6144", 00:18:45.641 "ffdhe8192" 00:18:45.641 ], 00:18:45.641 "rdma_umr_per_io": false 00:18:45.641 } 00:18:45.641 }, 00:18:45.641 { 00:18:45.641 "method": "bdev_nvme_attach_controller", 00:18:45.641 "params": { 00:18:45.641 "name": "nvme0", 00:18:45.641 "trtype": "TCP", 00:18:45.641 "adrfam": "IPv4", 00:18:45.641 "traddr": "10.0.0.2", 00:18:45.641 "trsvcid": "4420", 00:18:45.641 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:45.641 "prchk_reftag": false, 00:18:45.641 "prchk_guard": false, 00:18:45.641 "ctrlr_loss_timeout_sec": 0, 00:18:45.641 "reconnect_delay_sec": 0, 00:18:45.641 "fast_io_fail_timeout_sec": 0, 00:18:45.641 "psk": "key0", 00:18:45.641 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:45.641 "hdgst": false, 00:18:45.641 "ddgst": false, 00:18:45.641 "multipath": "multipath" 00:18:45.641 } 00:18:45.641 }, 00:18:45.641 { 00:18:45.641 "method": "bdev_nvme_set_hotplug", 00:18:45.641 "params": { 00:18:45.641 "period_us": 100000, 00:18:45.641 "enable": false 00:18:45.641 } 00:18:45.641 }, 00:18:45.641 { 00:18:45.641 "method": "bdev_enable_histogram", 00:18:45.641 "params": { 00:18:45.641 "name": "nvme0n1", 00:18:45.641 "enable": true 00:18:45.641 } 00:18:45.641 }, 00:18:45.641 { 00:18:45.641 "method": "bdev_wait_for_examine" 00:18:45.641 } 00:18:45.641 ] 00:18:45.641 }, 00:18:45.641 { 00:18:45.641 "subsystem": "nbd", 00:18:45.641 "config": [] 00:18:45.641 } 00:18:45.641 ] 00:18:45.641 }' 00:18:45.641 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1539339 00:18:45.641 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1539339 ']' 00:18:45.641 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1539339 00:18:45.641 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:45.641 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:45.641 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1539339 00:18:45.641 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:45.641 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:45.641 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1539339' 00:18:45.641 killing process with pid 1539339 00:18:45.641 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1539339 00:18:45.641 Received shutdown signal, test time was about 1.000000 seconds 00:18:45.641 00:18:45.641 Latency(us) 00:18:45.641 [2024-12-12T09:32:19.664Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.641 [2024-12-12T09:32:19.664Z] =================================================================================================================== 00:18:45.641 [2024-12-12T09:32:19.664Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:45.641 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1539339 00:18:45.901 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1539235 00:18:45.901 10:32:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1539235 ']' 00:18:45.901 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1539235 00:18:45.901 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:45.901 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:45.901 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1539235 00:18:45.901 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:45.901 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:45.901 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1539235' 00:18:45.901 killing process with pid 1539235 00:18:45.901 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1539235 00:18:45.901 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1539235 00:18:46.160 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:18:46.160 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:18:46.160 "subsystems": [ 00:18:46.160 { 00:18:46.160 "subsystem": "keyring", 00:18:46.160 "config": [ 00:18:46.160 { 00:18:46.160 "method": "keyring_file_add_key", 00:18:46.160 "params": { 00:18:46.160 "name": "key0", 00:18:46.160 "path": "/tmp/tmp.kXyiquXCpP" 00:18:46.160 } 00:18:46.160 } 00:18:46.160 ] 00:18:46.160 }, 00:18:46.160 { 00:18:46.160 "subsystem": "iobuf", 00:18:46.160 "config": [ 00:18:46.160 { 00:18:46.160 "method": "iobuf_set_options", 00:18:46.160 "params": { 00:18:46.160 "small_pool_count": 8192, 00:18:46.160 "large_pool_count": 1024, 00:18:46.160 "small_bufsize": 8192, 00:18:46.160 "large_bufsize": 135168, 00:18:46.160 "enable_numa": false 00:18:46.160 } 00:18:46.160 } 00:18:46.160 ] 00:18:46.160 }, 00:18:46.160 { 00:18:46.160 "subsystem": "sock", 00:18:46.160 "config": [ 00:18:46.160 { 00:18:46.160 "method": "sock_set_default_impl", 00:18:46.160 "params": { 00:18:46.160 "impl_name": "posix" 00:18:46.160 } 00:18:46.160 }, 00:18:46.160 { 00:18:46.160 "method": "sock_impl_set_options", 00:18:46.160 "params": { 00:18:46.160 "impl_name": "ssl", 00:18:46.160 "recv_buf_size": 4096, 00:18:46.160 "send_buf_size": 4096, 00:18:46.160 "enable_recv_pipe": true, 00:18:46.160 "enable_quickack": false, 00:18:46.160 "enable_placement_id": 0, 00:18:46.160 "enable_zerocopy_send_server": true, 00:18:46.160 "enable_zerocopy_send_client": false, 00:18:46.160 "zerocopy_threshold": 0, 00:18:46.160 "tls_version": 0, 00:18:46.160 "enable_ktls": false 00:18:46.160 } 00:18:46.160 }, 00:18:46.160 { 00:18:46.160 "method": "sock_impl_set_options", 00:18:46.160 "params": { 00:18:46.160 "impl_name": "posix", 00:18:46.160 "recv_buf_size": 2097152, 00:18:46.160 "send_buf_size": 2097152, 00:18:46.160 "enable_recv_pipe": true, 00:18:46.160 "enable_quickack": false, 00:18:46.160 "enable_placement_id": 0, 00:18:46.160 "enable_zerocopy_send_server": true, 00:18:46.160 "enable_zerocopy_send_client": false, 00:18:46.160 "zerocopy_threshold": 0, 00:18:46.160 "tls_version": 0, 00:18:46.160 "enable_ktls": false 00:18:46.160 } 00:18:46.160 } 00:18:46.160 ] 00:18:46.160 }, 00:18:46.160 { 00:18:46.160 "subsystem": "vmd", 00:18:46.160 "config": 
[] 00:18:46.160 }, 00:18:46.160 { 00:18:46.160 "subsystem": "accel", 00:18:46.160 "config": [ 00:18:46.160 { 00:18:46.160 "method": "accel_set_options", 00:18:46.160 "params": { 00:18:46.160 "small_cache_size": 128, 00:18:46.160 "large_cache_size": 16, 00:18:46.160 "task_count": 2048, 00:18:46.160 "sequence_count": 2048, 00:18:46.160 "buf_count": 2048 00:18:46.160 } 00:18:46.160 } 00:18:46.160 ] 00:18:46.160 }, 00:18:46.160 { 00:18:46.160 "subsystem": "bdev", 00:18:46.160 "config": [ 00:18:46.160 { 00:18:46.160 "method": "bdev_set_options", 00:18:46.160 "params": { 00:18:46.160 "bdev_io_pool_size": 65535, 00:18:46.160 "bdev_io_cache_size": 256, 00:18:46.160 "bdev_auto_examine": true, 00:18:46.161 "iobuf_small_cache_size": 128, 00:18:46.161 "iobuf_large_cache_size": 16 00:18:46.161 } 00:18:46.161 }, 00:18:46.161 { 00:18:46.161 "method": "bdev_raid_set_options", 00:18:46.161 "params": { 00:18:46.161 "process_window_size_kb": 1024, 00:18:46.161 "process_max_bandwidth_mb_sec": 0 00:18:46.161 } 00:18:46.161 }, 00:18:46.161 { 00:18:46.161 "method": "bdev_iscsi_set_options", 00:18:46.161 "params": { 00:18:46.161 "timeout_sec": 30 00:18:46.161 } 00:18:46.161 }, 00:18:46.161 { 00:18:46.161 "method": "bdev_nvme_set_options", 00:18:46.161 "params": { 00:18:46.161 "action_on_timeout": "none", 00:18:46.161 "timeout_us": 0, 00:18:46.161 "timeout_admin_us": 0, 00:18:46.161 "keep_alive_timeout_ms": 10000, 00:18:46.161 "arbitration_burst": 0, 00:18:46.161 "low_priority_weight": 0, 00:18:46.161 "medium_priority_weight": 0, 00:18:46.161 "high_priority_weight": 0, 00:18:46.161 "nvme_adminq_poll_period_us": 10000, 00:18:46.161 "nvme_ioq_poll_period_us": 0, 00:18:46.161 "io_queue_requests": 0, 00:18:46.161 "delay_cmd_submit": true, 00:18:46.161 "transport_retry_count": 4, 00:18:46.161 "bdev_retry_count": 3, 00:18:46.161 "transport_ack_timeout": 0, 00:18:46.161 "ctrlr_loss_timeout_sec": 0, 00:18:46.161 "reconnect_delay_sec": 0, 00:18:46.161 "fast_io_fail_timeout_sec": 0, 00:18:46.161 "disable_auto_failback": false, 00:18:46.161 "generate_uuids": false, 00:18:46.161 "transport_tos": 0, 00:18:46.161 "nvme_error_stat": false, 00:18:46.161 "rdma_srq_size": 0, 00:18:46.161 "io_path_stat": false, 00:18:46.161 "allow_accel_sequence": false, 00:18:46.161 "rdma_max_cq_size": 0, 00:18:46.161 "rdma_cm_event_timeout_ms": 0, 00:18:46.161 "dhchap_digests": [ 00:18:46.161 "sha256", 00:18:46.161 "sha384", 00:18:46.161 "sha512" 00:18:46.161 ], 00:18:46.161 "dhchap_dhgroups": [ 00:18:46.161 "null", 00:18:46.161 "ffdhe2048", 00:18:46.161 "ffdhe3072", 00:18:46.161 "ffdhe4096", 00:18:46.161 "ffdhe6144", 00:18:46.161 "ffdhe8192" 00:18:46.161 ], 00:18:46.161 "rdma_umr_per_io": false 00:18:46.161 } 00:18:46.161 }, 00:18:46.161 { 00:18:46.161 "method": "bdev_nvme_set_hotplug", 00:18:46.161 "params": { 00:18:46.161 "period_us": 100000, 00:18:46.161 "enable": false 00:18:46.161 } 00:18:46.161 }, 00:18:46.161 { 00:18:46.161 "method": "bdev_malloc_create", 00:18:46.161 "params": { 00:18:46.161 "name": "malloc0", 00:18:46.161 "num_blocks": 8192, 00:18:46.161 "block_size": 4096, 00:18:46.161 "physical_block_size": 4096, 00:18:46.161 "uuid": "79e8362b-5beb-4027-9305-9f532c966480", 00:18:46.161 "optimal_io_boundary": 0, 00:18:46.161 "md_size": 0, 00:18:46.161 "dif_type": 0, 00:18:46.161 "dif_is_head_of_md": false, 00:18:46.161 "dif_pi_format": 0 00:18:46.161 } 00:18:46.161 }, 00:18:46.161 { 00:18:46.161 "method": "bdev_wait_for_examine" 00:18:46.161 } 00:18:46.161 ] 00:18:46.161 }, 00:18:46.161 { 00:18:46.161 "subsystem": "nbd", 00:18:46.161 
"config": [] 00:18:46.161 }, 00:18:46.161 { 00:18:46.161 "subsystem": "scheduler", 00:18:46.161 "config": [ 00:18:46.161 { 00:18:46.161 "method": "framework_set_scheduler", 00:18:46.161 "params": { 00:18:46.161 "name": "static" 00:18:46.161 } 00:18:46.161 } 00:18:46.161 ] 00:18:46.161 }, 00:18:46.161 { 00:18:46.161 "subsystem": "nvmf", 00:18:46.161 "config": [ 00:18:46.161 { 00:18:46.161 "method": "nvmf_set_config", 00:18:46.161 "params": { 00:18:46.161 "discovery_filter": "match_any", 00:18:46.161 "admin_cmd_passthru": { 00:18:46.161 "identify_ctrlr": false 00:18:46.161 }, 00:18:46.161 "dhchap_digests": [ 00:18:46.161 "sha256", 00:18:46.161 "sha384", 00:18:46.161 "sha512" 00:18:46.161 ], 00:18:46.161 "dhchap_dhgroups": [ 00:18:46.161 "null", 00:18:46.161 "ffdhe2048", 00:18:46.161 "ffdhe3072", 00:18:46.161 "ffdhe4096", 00:18:46.161 "ffdhe6144", 00:18:46.161 "ffdhe8192" 00:18:46.161 ] 00:18:46.161 } 00:18:46.161 }, 00:18:46.161 { 00:18:46.161 "method": "nvmf_set_max_subsystems", 00:18:46.161 "params": { 00:18:46.161 "max_subsystems": 1024 00:18:46.161 } 00:18:46.161 }, 00:18:46.161 { 00:18:46.161 "method": "nvmf_set_crdt", 00:18:46.161 "params": { 00:18:46.161 "crdt1": 0, 00:18:46.161 "crdt2": 0, 00:18:46.161 "crdt3": 0 00:18:46.161 } 00:18:46.161 }, 00:18:46.161 { 00:18:46.161 "method": "nvmf_create_transport", 00:18:46.161 "params": { 00:18:46.161 "trtype": "TCP", 00:18:46.161 "max_queue_depth": 128, 00:18:46.161 "max_io_qpairs_per_ctrlr": 127, 00:18:46.161 "in_capsule_data_size": 4096, 00:18:46.161 "max_io_size": 131072, 00:18:46.161 "io_unit_size": 131072, 00:18:46.161 "max_aq_depth": 128, 00:18:46.161 "num_shared_buffers": 511, 00:18:46.161 "buf_cache_size": 4294967295, 00:18:46.161 "dif_insert_or_strip": false, 00:18:46.161 "zcopy": false, 00:18:46.161 "c2h_success": false, 00:18:46.161 "sock_priority": 0, 00:18:46.161 "abort_timeout_sec": 1, 00:18:46.161 "ack_timeout": 0, 00:18:46.161 "data_wr_pool_size": 0 00:18:46.161 } 00:18:46.161 }, 00:18:46.161 { 00:18:46.161 "method": "nvmf_create_subsystem", 00:18:46.161 "params": { 00:18:46.161 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:46.161 "allow_any_host": false, 00:18:46.161 "serial_number": "00000000000000000000", 00:18:46.161 "model_number": "SPDK bdev Controller", 00:18:46.161 "max_namespaces": 32, 00:18:46.161 "min_cntlid": 1, 00:18:46.161 "max_cntlid": 65519, 00:18:46.161 "ana_reporting": false 00:18:46.161 } 00:18:46.161 }, 00:18:46.161 { 00:18:46.161 "method": "nvmf_subsystem_add_host", 00:18:46.161 "params": { 00:18:46.161 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:46.161 "host": "nqn.2016-06.io.spdk:host1", 00:18:46.161 "psk": "key0" 00:18:46.161 } 00:18:46.161 }, 00:18:46.161 { 00:18:46.161 "method": "nvmf_subsystem_add_ns", 00:18:46.161 "params": { 00:18:46.161 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:46.161 "namespace": { 00:18:46.161 "nsid": 1, 00:18:46.161 "bdev_name": "malloc0", 00:18:46.161 "nguid": "79E8362B5BEB402793059F532C966480", 00:18:46.161 "uuid": "79e8362b-5beb-4027-9305-9f532c966480", 00:18:46.161 "no_auto_visible": false 00:18:46.161 } 00:18:46.161 } 00:18:46.161 }, 00:18:46.161 { 00:18:46.161 "method": "nvmf_subsystem_add_listener", 00:18:46.161 "params": { 00:18:46.161 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:46.161 "listen_address": { 00:18:46.161 "trtype": "TCP", 00:18:46.161 "adrfam": "IPv4", 00:18:46.161 "traddr": "10.0.0.2", 00:18:46.161 "trsvcid": "4420" 00:18:46.161 }, 00:18:46.161 "secure_channel": false, 00:18:46.161 "sock_impl": "ssl" 00:18:46.161 } 00:18:46.161 } 00:18:46.161 ] 00:18:46.161 } 
00:18:46.161 ] 00:18:46.161 }' 00:18:46.161 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:46.161 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:46.161 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:46.161 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1539726 00:18:46.161 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1539726 00:18:46.161 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:46.161 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1539726 ']' 00:18:46.161 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:46.161 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:46.161 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:46.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:46.161 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:46.161 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:46.161 [2024-12-12 10:32:20.022483] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:18:46.161 [2024-12-12 10:32:20.022534] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:46.161 [2024-12-12 10:32:20.101143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.161 [2024-12-12 10:32:20.141618] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:46.161 [2024-12-12 10:32:20.141652] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:46.161 [2024-12-12 10:32:20.141662] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:46.161 [2024-12-12 10:32:20.141669] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:46.161 [2024-12-12 10:32:20.141674] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
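The restart above replays a configuration captured with save_config into a fresh nvmf_tgt through an inherited file descriptor (-c /dev/fd/62). A minimal sketch of that pattern, assuming a built SPDK tree, bash process substitution, and illustrative paths (only save_config, the -i/-e flags, and the /dev/fd mechanism are taken from the trace):

  # dump the live JSON-RPC configuration of a running target
  cfg=$(./scripts/rpc.py save_config)
  # hand it to a new instance; <(...) appears as /dev/fd/NN, as in the trace above
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$cfg") &
  # the test harness then polls the RPC socket (waitforlisten) before issuing commands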
00:18:46.161 [2024-12-12 10:32:20.142202] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.420 [2024-12-12 10:32:20.356578] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:46.420 [2024-12-12 10:32:20.388615] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:46.420 [2024-12-12 10:32:20.388824] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:46.988 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:46.988 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:46.988 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:46.988 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:46.988 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:46.988 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:46.988 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1539959 00:18:46.988 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1539959 /var/tmp/bdevperf.sock 00:18:46.988 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1539959 ']' 00:18:46.988 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:46.988 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:46.988 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:46.988 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:46.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
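On the bdevperf side, the trace (here and for the first instance earlier in the run) follows one flow: start bdevperf in wait mode, add the TLS PSK to the keyring, attach a controller over NVMe/TCP with that key, then drive the verify workload. A standalone sketch with the flags copied from the trace; the PSK file path is illustrative:

  ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/psk.key
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

(This second instance instead receives the same settings as a JSON config via -c /dev/fd/63, as echoed below.)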
00:18:46.988 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:18:46.988 "subsystems": [ 00:18:46.988 { 00:18:46.988 "subsystem": "keyring", 00:18:46.988 "config": [ 00:18:46.988 { 00:18:46.988 "method": "keyring_file_add_key", 00:18:46.988 "params": { 00:18:46.988 "name": "key0", 00:18:46.988 "path": "/tmp/tmp.kXyiquXCpP" 00:18:46.988 } 00:18:46.988 } 00:18:46.988 ] 00:18:46.988 }, 00:18:46.988 { 00:18:46.988 "subsystem": "iobuf", 00:18:46.988 "config": [ 00:18:46.988 { 00:18:46.988 "method": "iobuf_set_options", 00:18:46.988 "params": { 00:18:46.988 "small_pool_count": 8192, 00:18:46.988 "large_pool_count": 1024, 00:18:46.988 "small_bufsize": 8192, 00:18:46.988 "large_bufsize": 135168, 00:18:46.988 "enable_numa": false 00:18:46.988 } 00:18:46.988 } 00:18:46.988 ] 00:18:46.988 }, 00:18:46.988 { 00:18:46.988 "subsystem": "sock", 00:18:46.988 "config": [ 00:18:46.988 { 00:18:46.988 "method": "sock_set_default_impl", 00:18:46.988 "params": { 00:18:46.988 "impl_name": "posix" 00:18:46.988 } 00:18:46.988 }, 00:18:46.988 { 00:18:46.988 "method": "sock_impl_set_options", 00:18:46.988 "params": { 00:18:46.988 "impl_name": "ssl", 00:18:46.988 "recv_buf_size": 4096, 00:18:46.988 "send_buf_size": 4096, 00:18:46.988 "enable_recv_pipe": true, 00:18:46.988 "enable_quickack": false, 00:18:46.988 "enable_placement_id": 0, 00:18:46.988 "enable_zerocopy_send_server": true, 00:18:46.988 "enable_zerocopy_send_client": false, 00:18:46.988 "zerocopy_threshold": 0, 00:18:46.988 "tls_version": 0, 00:18:46.988 "enable_ktls": false 00:18:46.988 } 00:18:46.988 }, 00:18:46.988 { 00:18:46.988 "method": "sock_impl_set_options", 00:18:46.988 "params": { 00:18:46.988 "impl_name": "posix", 00:18:46.988 "recv_buf_size": 2097152, 00:18:46.988 "send_buf_size": 2097152, 00:18:46.988 "enable_recv_pipe": true, 00:18:46.988 "enable_quickack": false, 00:18:46.988 "enable_placement_id": 0, 00:18:46.988 "enable_zerocopy_send_server": true, 00:18:46.988 "enable_zerocopy_send_client": false, 00:18:46.988 "zerocopy_threshold": 0, 00:18:46.988 "tls_version": 0, 00:18:46.988 "enable_ktls": false 00:18:46.988 } 00:18:46.988 } 00:18:46.988 ] 00:18:46.988 }, 00:18:46.988 { 00:18:46.988 "subsystem": "vmd", 00:18:46.988 "config": [] 00:18:46.988 }, 00:18:46.988 { 00:18:46.988 "subsystem": "accel", 00:18:46.988 "config": [ 00:18:46.988 { 00:18:46.988 "method": "accel_set_options", 00:18:46.988 "params": { 00:18:46.988 "small_cache_size": 128, 00:18:46.988 "large_cache_size": 16, 00:18:46.988 "task_count": 2048, 00:18:46.988 "sequence_count": 2048, 00:18:46.988 "buf_count": 2048 00:18:46.988 } 00:18:46.988 } 00:18:46.988 ] 00:18:46.988 }, 00:18:46.988 { 00:18:46.988 "subsystem": "bdev", 00:18:46.988 "config": [ 00:18:46.988 { 00:18:46.988 "method": "bdev_set_options", 00:18:46.988 "params": { 00:18:46.988 "bdev_io_pool_size": 65535, 00:18:46.988 "bdev_io_cache_size": 256, 00:18:46.988 "bdev_auto_examine": true, 00:18:46.988 "iobuf_small_cache_size": 128, 00:18:46.988 "iobuf_large_cache_size": 16 00:18:46.988 } 00:18:46.988 }, 00:18:46.988 { 00:18:46.988 "method": "bdev_raid_set_options", 00:18:46.988 "params": { 00:18:46.988 "process_window_size_kb": 1024, 00:18:46.988 "process_max_bandwidth_mb_sec": 0 00:18:46.988 } 00:18:46.988 }, 00:18:46.988 { 00:18:46.988 "method": "bdev_iscsi_set_options", 00:18:46.988 "params": { 00:18:46.988 "timeout_sec": 30 00:18:46.989 } 00:18:46.989 }, 00:18:46.989 { 00:18:46.989 "method": "bdev_nvme_set_options", 00:18:46.989 "params": { 00:18:46.989 "action_on_timeout": "none", 
00:18:46.989 "timeout_us": 0, 00:18:46.989 "timeout_admin_us": 0, 00:18:46.989 "keep_alive_timeout_ms": 10000, 00:18:46.989 "arbitration_burst": 0, 00:18:46.989 "low_priority_weight": 0, 00:18:46.989 "medium_priority_weight": 0, 00:18:46.989 "high_priority_weight": 0, 00:18:46.989 "nvme_adminq_poll_period_us": 10000, 00:18:46.989 "nvme_ioq_poll_period_us": 0, 00:18:46.989 "io_queue_requests": 512, 00:18:46.989 "delay_cmd_submit": true, 00:18:46.989 "transport_retry_count": 4, 00:18:46.989 "bdev_retry_count": 3, 00:18:46.989 "transport_ack_timeout": 0, 00:18:46.989 "ctrlr_loss_timeout_sec": 0, 00:18:46.989 "reconnect_delay_sec": 0, 00:18:46.989 "fast_io_fail_timeout_sec": 0, 00:18:46.989 "disable_auto_failback": false, 00:18:46.989 "generate_uuids": false, 00:18:46.989 "transport_tos": 0, 00:18:46.989 "nvme_error_stat": false, 00:18:46.989 "rdma_srq_size": 0, 00:18:46.989 "io_path_stat": false, 00:18:46.989 "allow_accel_sequence": false, 00:18:46.989 "rdma_max_cq_size": 0, 00:18:46.989 "rdma_cm_event_timeout_ms": 0, 00:18:46.989 "dhchap_digests": [ 00:18:46.989 "sha256", 00:18:46.989 "sha384", 00:18:46.989 "sha512" 00:18:46.989 ], 00:18:46.989 "dhchap_dhgroups": [ 00:18:46.989 "null", 00:18:46.989 "ffdhe2048", 00:18:46.989 "ffdhe3072", 00:18:46.989 "ffdhe4096", 00:18:46.989 "ffdhe6144", 00:18:46.989 "ffdhe8192" 00:18:46.989 ], 00:18:46.989 "rdma_umr_per_io": false 00:18:46.989 } 00:18:46.989 }, 00:18:46.989 { 00:18:46.989 "method": "bdev_nvme_attach_controller", 00:18:46.989 "params": { 00:18:46.989 "name": "nvme0", 00:18:46.989 "trtype": "TCP", 00:18:46.989 "adrfam": "IPv4", 00:18:46.989 "traddr": "10.0.0.2", 00:18:46.989 "trsvcid": "4420", 00:18:46.989 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:46.989 "prchk_reftag": false, 00:18:46.989 "prchk_guard": false, 00:18:46.989 "ctrlr_loss_timeout_sec": 0, 00:18:46.989 "reconnect_delay_sec": 0, 00:18:46.989 "fast_io_fail_timeout_sec": 0, 00:18:46.989 "psk": "key0", 00:18:46.989 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:46.989 "hdgst": false, 00:18:46.989 "ddgst": false, 00:18:46.989 "multipath": "multipath" 00:18:46.989 } 00:18:46.989 }, 00:18:46.989 { 00:18:46.989 "method": "bdev_nvme_set_hotplug", 00:18:46.989 "params": { 00:18:46.989 "period_us": 100000, 00:18:46.989 "enable": false 00:18:46.989 } 00:18:46.989 }, 00:18:46.989 { 00:18:46.989 "method": "bdev_enable_histogram", 00:18:46.989 "params": { 00:18:46.989 "name": "nvme0n1", 00:18:46.989 "enable": true 00:18:46.989 } 00:18:46.989 }, 00:18:46.989 { 00:18:46.989 "method": "bdev_wait_for_examine" 00:18:46.989 } 00:18:46.989 ] 00:18:46.989 }, 00:18:46.989 { 00:18:46.989 "subsystem": "nbd", 00:18:46.989 "config": [] 00:18:46.989 } 00:18:46.989 ] 00:18:46.989 }' 00:18:46.989 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:46.989 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:46.989 [2024-12-12 10:32:20.940658] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:18:46.989 [2024-12-12 10:32:20.940704] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1539959 ] 00:18:47.248 [2024-12-12 10:32:21.013113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.248 [2024-12-12 10:32:21.052004] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:47.248 [2024-12-12 10:32:21.205672] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:47.815 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:47.815 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:47.815 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:47.815 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:18:48.074 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.074 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:48.074 Running I/O for 1 seconds... 00:18:49.450 5284.00 IOPS, 20.64 MiB/s 00:18:49.450 Latency(us) 00:18:49.450 [2024-12-12T09:32:23.473Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.450 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:49.450 Verification LBA range: start 0x0 length 0x2000 00:18:49.450 nvme0n1 : 1.01 5346.17 20.88 0.00 0.00 23785.76 5367.71 35451.86 00:18:49.450 [2024-12-12T09:32:23.473Z] =================================================================================================================== 00:18:49.450 [2024-12-12T09:32:23.473Z] Total : 5346.17 20.88 0.00 0.00 23785.76 5367.71 35451.86 00:18:49.450 { 00:18:49.450 "results": [ 00:18:49.450 { 00:18:49.450 "job": "nvme0n1", 00:18:49.450 "core_mask": "0x2", 00:18:49.450 "workload": "verify", 00:18:49.450 "status": "finished", 00:18:49.450 "verify_range": { 00:18:49.450 "start": 0, 00:18:49.450 "length": 8192 00:18:49.450 }, 00:18:49.450 "queue_depth": 128, 00:18:49.450 "io_size": 4096, 00:18:49.450 "runtime": 1.012501, 00:18:49.450 "iops": 5346.167559340682, 00:18:49.450 "mibps": 20.883467028674538, 00:18:49.450 "io_failed": 0, 00:18:49.450 "io_timeout": 0, 00:18:49.450 "avg_latency_us": 23785.763472416493, 00:18:49.450 "min_latency_us": 5367.710476190477, 00:18:49.450 "max_latency_us": 35451.85523809524 00:18:49.450 } 00:18:49.450 ], 00:18:49.450 "core_count": 1 00:18:49.450 } 00:18:49.450 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:18:49.450 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:18:49.450 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:49.450 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:18:49.450 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:18:49.450 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id 
= --pid ']' 00:18:49.450 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:49.450 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:18:49.450 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:18:49.450 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:18:49.450 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:49.450 nvmf_trace.0 00:18:49.450 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:18:49.450 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1539959 00:18:49.450 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1539959 ']' 00:18:49.450 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1539959 00:18:49.450 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:49.450 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:49.450 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1539959 00:18:49.450 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:49.450 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:49.450 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1539959' 00:18:49.450 killing process with pid 1539959 00:18:49.450 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1539959 00:18:49.450 Received shutdown signal, test time was about 1.000000 seconds 00:18:49.450 00:18:49.450 Latency(us) 00:18:49.450 [2024-12-12T09:32:23.473Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.450 [2024-12-12T09:32:23.473Z] =================================================================================================================== 00:18:49.450 [2024-12-12T09:32:23.473Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:49.450 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1539959 00:18:49.450 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:49.450 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:49.450 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:18:49.450 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:49.450 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:18:49.450 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:49.450 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:49.450 rmmod nvme_tcp 00:18:49.450 rmmod nvme_fabrics 00:18:49.450 rmmod nvme_keyring 00:18:49.709 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:49.709 10:32:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:18:49.709 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:18:49.709 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 1539726 ']' 00:18:49.709 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 1539726 00:18:49.709 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1539726 ']' 00:18:49.709 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1539726 00:18:49.709 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:49.709 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:49.709 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1539726 00:18:49.709 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:49.709 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:49.709 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1539726' 00:18:49.709 killing process with pid 1539726 00:18:49.709 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1539726 00:18:49.709 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1539726 00:18:49.709 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:49.709 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:49.709 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:49.709 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:18:49.709 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:18:49.709 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:49.709 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:18:49.709 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:49.709 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:49.709 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.709 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:49.709 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:52.245 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:52.245 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.muzr3IAila /tmp/tmp.KrssVhLo4a /tmp/tmp.kXyiquXCpP 00:18:52.245 00:18:52.245 real 1m19.931s 00:18:52.245 user 2m2.122s 00:18:52.245 sys 0m31.155s 00:18:52.245 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:52.245 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:52.245 ************************************ 00:18:52.245 END TEST nvmf_tls 
00:18:52.245 ************************************ 00:18:52.245 10:32:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:52.245 10:32:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:52.245 10:32:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:52.245 10:32:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:52.245 ************************************ 00:18:52.245 START TEST nvmf_fips 00:18:52.245 ************************************ 00:18:52.245 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:52.245 * Looking for test storage... 00:18:52.245 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:52.245 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:52.245 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:18:52.245 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:52.245 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:52.245 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:52.245 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:52.245 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:52.245 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:52.245 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:52.245 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:52.245 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:52.245 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:18:52.245 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:18:52.245 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:18:52.245 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:52.245 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:52.245 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:52.245 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:52.245 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:52.245 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:52.245 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:52.245 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:52.245 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:52.245 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:52.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.246 --rc genhtml_branch_coverage=1 00:18:52.246 --rc genhtml_function_coverage=1 00:18:52.246 --rc genhtml_legend=1 00:18:52.246 --rc geninfo_all_blocks=1 00:18:52.246 --rc geninfo_unexecuted_blocks=1 00:18:52.246 00:18:52.246 ' 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:52.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.246 --rc genhtml_branch_coverage=1 00:18:52.246 --rc genhtml_function_coverage=1 00:18:52.246 --rc genhtml_legend=1 00:18:52.246 --rc geninfo_all_blocks=1 00:18:52.246 --rc geninfo_unexecuted_blocks=1 00:18:52.246 00:18:52.246 ' 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:52.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.246 --rc genhtml_branch_coverage=1 00:18:52.246 --rc genhtml_function_coverage=1 00:18:52.246 --rc genhtml_legend=1 00:18:52.246 --rc geninfo_all_blocks=1 00:18:52.246 --rc geninfo_unexecuted_blocks=1 00:18:52.246 00:18:52.246 ' 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:52.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.246 --rc genhtml_branch_coverage=1 00:18:52.246 --rc genhtml_function_coverage=1 00:18:52.246 --rc genhtml_legend=1 00:18:52.246 --rc geninfo_all_blocks=1 00:18:52.246 --rc geninfo_unexecuted_blocks=1 00:18:52.246 00:18:52.246 ' 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:52.246 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:18:52.246 10:32:26 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:18:52.246 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:18:52.247 Error setting digest 00:18:52.247 400246FBF77F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:18:52.247 400246FBF77F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:52.247 
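The entries above are fips.sh proving that the host OpenSSL will actually enforce FIPS before any NVMe/TCP TLS work starts: the version string is compared against the 3.0.0 floor, fips.so is located under the directory reported by "openssl info -modulesdir", a FIPS-only OPENSSL_CONF (spdk_fips.conf) is generated, the provider list is checked, and a deliberate "openssl md5" is expected to fail, so the "Error setting digest" above is the success condition. A minimal standalone sketch of the same probe, assuming an OpenSSL 3.x host; the fips_only.conf name and the messages are illustrative, not taken from fips.sh:

    #!/usr/bin/env bash
    # Sketch: verify that this OpenSSL install can enforce FIPS mode.
    ver=$(openssl version | awk '{print $2}')      # e.g. "3.1.1"
    moddir=$(openssl info -modulesdir)             # e.g. /usr/lib64/ossl-modules
    [[ -f $moddir/fips.so ]] || { echo "no FIPS provider module"; exit 1; }
    # Under a FIPS-only config a non-approved digest such as MD5 must fail,
    # exactly like the "Error setting digest" failure captured in the log.
    if OPENSSL_CONF=fips_only.conf openssl md5 /dev/null >/dev/null 2>&1; then
        echo "MD5 succeeded, FIPS is not being enforced"; exit 1
    fi
    echo "OpenSSL $ver rejects MD5, FIPS is enforced"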
10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:18:52.247 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:58.820 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:58.820 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:18:58.820 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:58.820 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:58.820 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:58.820 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:58.820 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:58.820 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:18:58.820 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:58.820 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:18:58.820 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:18:58.820 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:18:58.820 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:18:58.820 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:18:58.820 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:18:58.820 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:58.820 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:58.820 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:58.820 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:58.820 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:58.820 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:58.820 10:32:31 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:58.820 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:58.820 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:58.820 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:58.820 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:58.820 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:58.820 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:58.821 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:58.821 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:58.821 10:32:31 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:58.821 Found net devices under 0000:af:00.0: cvl_0_0 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:58.821 Found net devices under 0000:af:00.1: cvl_0_1 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:58.821 10:32:31 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:58.821 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:58.821 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:58.821 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:58.821 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:58.821 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:58.821 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:58.821 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.417 ms 00:18:58.821 00:18:58.821 --- 10.0.0.2 ping statistics --- 00:18:58.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.821 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:18:58.821 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:58.821 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:58.821 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:18:58.821 00:18:58.821 --- 10.0.0.1 ping statistics --- 00:18:58.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.821 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:18:58.821 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:58.821 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:18:58.821 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:58.821 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:58.821 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:58.821 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:58.821 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:58.821 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:58.821 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:58.821 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:18:58.821 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:58.821 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:58.821 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:58.821 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=1543906 00:18:58.821 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 1543906 00:18:58.821 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:58.821 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1543906 ']' 00:18:58.821 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.821 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:58.821 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:58.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:58.821 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:58.821 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:58.821 [2024-12-12 10:32:32.197862] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
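By this point nvmftestinit has turned the two detected E810 ports into a point-to-point rig: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side with 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator side with 10.0.0.1/24, TCP port 4420 is opened through iptables, and both directions are ping-verified before nvmf_tgt is launched inside the namespace. A condensed sketch of that topology, using the same commands as the trace but omitting the PCI discovery and error handling (the nvmf_tgt path is shortened relative to the workspace):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC moves into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &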
00:18:58.821 [2024-12-12 10:32:32.197907] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:58.821 [2024-12-12 10:32:32.276532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.821 [2024-12-12 10:32:32.315457] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:58.821 [2024-12-12 10:32:32.315491] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:58.821 [2024-12-12 10:32:32.315498] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:58.821 [2024-12-12 10:32:32.315503] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:58.821 [2024-12-12 10:32:32.315508] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:58.821 [2024-12-12 10:32:32.316005] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:59.080 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:59.080 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:18:59.080 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:59.080 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:59.080 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:59.080 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:59.080 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:18:59.081 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:59.081 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:18:59.081 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.H3T 00:18:59.081 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:59.081 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.H3T 00:18:59.081 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.H3T 00:18:59.081 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.H3T 00:18:59.081 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:59.340 [2024-12-12 10:32:33.236252] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:59.340 [2024-12-12 10:32:33.252264] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:59.340 [2024-12-12 10:32:33.252418] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:59.340 malloc0 00:18:59.340 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:59.340 10:32:33 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1544148 00:18:59.340 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:59.340 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1544148 /var/tmp/bdevperf.sock 00:18:59.340 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1544148 ']' 00:18:59.340 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:59.340 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:59.340 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:59.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:59.340 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:59.340 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:59.599 [2024-12-12 10:32:33.379626] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:18:59.599 [2024-12-12 10:32:33.379679] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1544148 ] 00:18:59.599 [2024-12-12 10:32:33.451693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.599 [2024-12-12 10:32:33.491469] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:00.534 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:00.534 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:00.534 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.H3T 00:19:00.534 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:00.792 [2024-12-12 10:32:34.581795] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:00.792 TLSTESTn1 00:19:00.792 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:00.792 Running I/O for 10 seconds... 
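This is the TLS exercise itself: fips.sh writes the sample NVMe-oF interchange PSK to a mode-0600 temp file, registers it in bdevperf's keyring, attaches a controller over NVMe/TCP with that key, and drives 10 seconds of verify I/O against TLSTESTn1. The same sequence condensed from the trace, with the jenkins workspace prefix shortened to relative paths:

    key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    key_path=$(mktemp -t spdk-psk.XXX)
    echo -n "$key" > "$key_path" && chmod 0600 "$key_path"
    # bdevperf starts suspended (-z) and waits to be driven over its RPC socket
    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 &
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk key0
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests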
00:19:03.105 5259.00 IOPS, 20.54 MiB/s
[2024-12-12T09:32:38.065Z] 5372.00 IOPS, 20.98 MiB/s
[2024-12-12T09:32:39.001Z] 5451.00 IOPS, 21.29 MiB/s
[2024-12-12T09:32:39.937Z] 5477.25 IOPS, 21.40 MiB/s
[2024-12-12T09:32:40.872Z] 5506.00 IOPS, 21.51 MiB/s
[2024-12-12T09:32:41.808Z] 5502.33 IOPS, 21.49 MiB/s
[2024-12-12T09:32:43.184Z] 5532.14 IOPS, 21.61 MiB/s
[2024-12-12T09:32:44.120Z] 5544.62 IOPS, 21.66 MiB/s
[2024-12-12T09:32:45.056Z] 5509.67 IOPS, 21.52 MiB/s
[2024-12-12T09:32:45.056Z] 5461.80 IOPS, 21.34 MiB/s
00:19:11.033 Latency(us)
[2024-12-12T09:32:45.056Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:11.033 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:11.033 Verification LBA range: start 0x0 length 0x2000
00:19:11.033 TLSTESTn1 : 10.02 5465.48 21.35 0.00 0.00 23384.31 5118.05 40195.41
00:19:11.033 [2024-12-12T09:32:45.056Z] ===================================================================================================================
00:19:11.033 [2024-12-12T09:32:45.056Z] Total : 5465.48 21.35 0.00 0.00 23384.31 5118.05 40195.41
00:19:11.033 {
00:19:11.033 "results": [
00:19:11.033 {
00:19:11.033 "job": "TLSTESTn1",
00:19:11.033 "core_mask": "0x4",
00:19:11.033 "workload": "verify",
00:19:11.033 "status": "finished",
00:19:11.033 "verify_range": {
00:19:11.033 "start": 0,
00:19:11.033 "length": 8192
00:19:11.033 },
00:19:11.033 "queue_depth": 128,
00:19:11.033 "io_size": 4096,
00:19:11.033 "runtime": 10.016503,
00:19:11.033 "iops": 5465.480317831482,
00:19:11.033 "mibps": 21.349532491529228,
00:19:11.033 "io_failed": 0,
00:19:11.033 "io_timeout": 0,
00:19:11.033 "avg_latency_us": 23384.305613924298,
00:19:11.033 "min_latency_us": 5118.049523809524,
00:19:11.033 "max_latency_us": 40195.41333333333
00:19:11.033 }
00:19:11.033 ],
00:19:11.033 "core_count": 1
00:19:11.033 }
00:19:11.033 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:19:11.033 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:19:11.033 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id
00:19:11.033 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0
00:19:11.033 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']'
00:19:11.033 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:19:11.033 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0
00:19:11.033 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]]
00:19:11.033 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files
00:19:11.033 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:19:11.033 nvmf_trace.0
00:19:11.033 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0
00:19:11.033 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1544148
00:19:11.033 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1544148 ']'
00:19:11.033 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 1544148 00:19:11.033 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:11.033 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:11.033 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1544148 00:19:11.033 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:11.033 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:11.033 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1544148' 00:19:11.033 killing process with pid 1544148 00:19:11.033 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1544148 00:19:11.033 Received shutdown signal, test time was about 10.000000 seconds 00:19:11.033 00:19:11.033 Latency(us) 00:19:11.033 [2024-12-12T09:32:45.056Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:11.033 [2024-12-12T09:32:45.056Z] =================================================================================================================== 00:19:11.033 [2024-12-12T09:32:45.056Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:11.033 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1544148 00:19:11.292 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:11.292 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:11.292 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:19:11.292 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:11.292 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:19:11.292 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:11.292 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:11.292 rmmod nvme_tcp 00:19:11.292 rmmod nvme_fabrics 00:19:11.292 rmmod nvme_keyring 00:19:11.292 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:11.293 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:11.293 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:11.293 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 1543906 ']' 00:19:11.293 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 1543906 00:19:11.293 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1543906 ']' 00:19:11.293 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1543906 00:19:11.293 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:11.293 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:11.293 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1543906 00:19:11.293 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:11.293 10:32:45 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:11.293 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1543906' 00:19:11.293 killing process with pid 1543906 00:19:11.293 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1543906 00:19:11.293 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1543906 00:19:11.551 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:11.551 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:11.551 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:11.551 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:19:11.551 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:19:11.551 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:11.551 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:19:11.551 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:11.551 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:11.551 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:11.551 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:11.551 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:13.455 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:13.455 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.H3T 00:19:13.715 00:19:13.715 real 0m21.636s 00:19:13.715 user 0m23.256s 00:19:13.715 sys 0m9.840s 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:13.715 ************************************ 00:19:13.715 END TEST nvmf_fips 00:19:13.715 ************************************ 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:13.715 ************************************ 00:19:13.715 START TEST nvmf_control_msg_list 00:19:13.715 ************************************ 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:13.715 * Looking for test storage... 
00:19:13.715 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:13.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.715 --rc genhtml_branch_coverage=1 00:19:13.715 --rc genhtml_function_coverage=1 00:19:13.715 --rc genhtml_legend=1 00:19:13.715 --rc geninfo_all_blocks=1 00:19:13.715 --rc geninfo_unexecuted_blocks=1 00:19:13.715 00:19:13.715 ' 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:13.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.715 --rc genhtml_branch_coverage=1 00:19:13.715 --rc genhtml_function_coverage=1 00:19:13.715 --rc genhtml_legend=1 00:19:13.715 --rc geninfo_all_blocks=1 00:19:13.715 --rc geninfo_unexecuted_blocks=1 00:19:13.715 00:19:13.715 ' 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:13.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.715 --rc genhtml_branch_coverage=1 00:19:13.715 --rc genhtml_function_coverage=1 00:19:13.715 --rc genhtml_legend=1 00:19:13.715 --rc geninfo_all_blocks=1 00:19:13.715 --rc geninfo_unexecuted_blocks=1 00:19:13.715 00:19:13.715 ' 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:13.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.715 --rc genhtml_branch_coverage=1 00:19:13.715 --rc genhtml_function_coverage=1 00:19:13.715 --rc genhtml_legend=1 00:19:13.715 --rc geninfo_all_blocks=1 00:19:13.715 --rc geninfo_unexecuted_blocks=1 00:19:13.715 00:19:13.715 ' 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:13.715 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:13.975 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:13.975 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:13.975 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:13.975 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:13.975 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:13.975 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:13.975 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:13.975 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:13.975 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:13.975 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:13.975 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:13.975 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.975 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.975 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.975 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:13.975 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.975 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:19:13.975 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:13.975 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:13.975 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:13.975 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:13.975 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:13.976 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:13.976 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:13.976 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:13.976 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:13.976 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:13.976 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:13.976 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:13.976 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:13.976 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:13.976 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:13.976 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:13.976 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:13.976 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:13.976 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:13.976 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:13.976 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:13.976 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:19:13.976 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:19:19.338 10:32:53 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:19.338 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:19.338 10:32:53 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:19.338 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:19.338 Found net devices under 0000:af:00.0: cvl_0_0 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:19.338 Found net devices under 0000:af:00.1: cvl_0_1 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:19.338 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:19.339 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:19.339 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:19.339 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:19.598 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:19.598 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:19.598 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:19.598 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:19.598 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:19.598 10:32:53 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:19.598 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:19.598 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:19.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:19.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.424 ms 00:19:19.598 00:19:19.598 --- 10.0.0.2 ping statistics --- 00:19:19.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:19.598 rtt min/avg/max/mdev = 0.424/0.424/0.424/0.000 ms 00:19:19.598 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:19.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:19.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:19:19.598 00:19:19.598 --- 10.0.0.1 ping statistics --- 00:19:19.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:19.598 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:19:19.598 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:19.598 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:19:19.598 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:19.598 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:19.598 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:19.598 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:19.598 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:19.598 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:19.598 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:19.598 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:19.598 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:19.598 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:19.598 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:19.598 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=1549420 00:19:19.598 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 1549420 00:19:19.598 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:19.857 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 1549420 ']' 00:19:19.857 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.857 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:19.857 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:19.857 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:19.857 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:19.857 [2024-12-12 10:32:53.668266] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:19:19.857 [2024-12-12 10:32:53.668310] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:19.857 [2024-12-12 10:32:53.745998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.857 [2024-12-12 10:32:53.785325] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:19.857 [2024-12-12 10:32:53.785356] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:19.857 [2024-12-12 10:32:53.785363] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:19.857 [2024-12-12 10:32:53.785370] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:19.857 [2024-12-12 10:32:53.785376] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
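Note on the "[: : integer expression expected" complaint from nvmf/common.sh line 33 in the trace above (it recurs below when wait_for_buf re-sources common.sh): it is non-fatal. Inside build_nvmf_app_args the script runs a numeric test against a variable that is unset in this configuration, so the shell sees [ '' -eq 1 ] and test(1) rejects the empty operand. A minimal sketch of the usual guard, with SOME_FLAG as a purely hypothetical stand-in (the trace does not show which variable common.sh tests at that line):

#!/usr/bin/env bash
# The failing shape captured in the trace: '[' '' -eq 1 ']'
# test(1) requires integers on both sides of -eq, so an unset/empty
# variable aborts the comparison with "integer expression expected".
SOME_FLAG=""                            # hypothetical stand-in for the unset flag
if [ "${SOME_FLAG:-0}" -eq 1 ]; then    # :-0 substitutes a default, keeping the operand numeric
    echo "flag enabled"
else
    echo "flag disabled (default)"      # the branch this run takes
fi

With the default in place the disabled branch is taken silently instead of printing a warning every time common.sh is sourced.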
00:19:19.857 [2024-12-12 10:32:53.785885] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.116 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:20.116 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:19:20.116 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:20.116 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:20.116 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:20.116 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:20.116 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:20.116 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:20.116 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:19:20.116 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.116 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:20.116 [2024-12-12 10:32:53.934368] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:20.116 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.116 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:19:20.116 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.116 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:20.116 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.116 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:20.116 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.116 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:20.116 Malloc0 00:19:20.116 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.116 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:20.116 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.116 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:20.116 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.116 10:32:53 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:20.116 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.116 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:20.116 [2024-12-12 10:32:53.970689] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:20.116 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.116 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1549601 00:19:20.116 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:20.116 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1549603 00:19:20.116 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:20.116 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1549605 00:19:20.116 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1549601 00:19:20.116 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:20.116 [2024-12-12 10:32:54.049055] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:20.116 [2024-12-12 10:32:54.069114] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:20.116 [2024-12-12 10:32:54.069251] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:21.492 Initializing NVMe Controllers 00:19:21.492 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:21.492 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:19:21.492 Initialization complete. Launching workers. 
00:19:21.492 ======================================================== 00:19:21.492 Latency(us) 00:19:21.492 Device Information : IOPS MiB/s Average min max 00:19:21.492 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 7914.00 30.91 126.04 114.75 342.01 00:19:21.492 ======================================================== 00:19:21.492 Total : 7914.00 30.91 126.04 114.75 342.01 00:19:21.492 00:19:21.492 Initializing NVMe Controllers 00:19:21.492 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:21.492 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:19:21.492 Initialization complete. Launching workers. 00:19:21.492 ======================================================== 00:19:21.492 Latency(us) 00:19:21.492 Device Information : IOPS MiB/s Average min max 00:19:21.492 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 41273.73 40364.08 41931.49 00:19:21.492 ======================================================== 00:19:21.492 Total : 25.00 0.10 41273.73 40364.08 41931.49 00:19:21.492 00:19:21.492 Initializing NVMe Controllers 00:19:21.492 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:21.492 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:19:21.492 Initialization complete. Launching workers. 00:19:21.492 ======================================================== 00:19:21.492 Latency(us) 00:19:21.492 Device Information : IOPS MiB/s Average min max 00:19:21.492 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 41194.58 40856.19 41919.86 00:19:21.492 ======================================================== 00:19:21.492 Total : 25.00 0.10 41194.58 40856.19 41919.86 00:19:21.492 00:19:21.492 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1549603 00:19:21.492 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1549605 00:19:21.492 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:21.492 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:19:21.492 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:21.492 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:19:21.492 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:21.492 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:19:21.492 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:21.492 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:21.492 rmmod nvme_tcp 00:19:21.492 rmmod nvme_fabrics 00:19:21.492 rmmod nvme_keyring 00:19:21.492 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:21.492 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:19:21.492 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:19:21.492 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@517 -- # '[' -n 1549420 ']' 00:19:21.492 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 1549420 00:19:21.492 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 1549420 ']' 00:19:21.492 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 1549420 00:19:21.492 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:19:21.492 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:21.492 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1549420 00:19:21.492 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:21.492 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:21.492 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1549420' 00:19:21.492 killing process with pid 1549420 00:19:21.492 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 1549420 00:19:21.492 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 1549420 00:19:21.751 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:21.751 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:21.751 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:21.751 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:19:21.751 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:19:21.751 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:21.751 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:19:21.751 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:21.751 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:21.751 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:21.751 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:21.751 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:24.288 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:24.288 00:19:24.288 real 0m10.139s 00:19:24.288 user 0m7.009s 00:19:24.288 sys 0m5.465s 00:19:24.288 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:24.288 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:24.288 ************************************ 00:19:24.288 END TEST nvmf_control_msg_list 00:19:24.288 
************************************ 00:19:24.288 10:32:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:24.288 10:32:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:24.288 10:32:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:24.288 10:32:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:24.288 ************************************ 00:19:24.288 START TEST nvmf_wait_for_buf 00:19:24.288 ************************************ 00:19:24.288 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:24.288 * Looking for test storage... 00:19:24.288 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:24.288 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:24.288 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:19:24.288 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:24.288 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:24.288 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:24.288 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:24.288 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:24.288 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:19:24.288 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:19:24.288 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:19:24.288 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:19:24.288 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:19:24.288 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:19:24.288 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:19:24.288 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:24.288 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:19:24.288 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:19:24.288 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:24.288 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:24.288 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:19:24.288 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:19:24.288 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:24.288 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:19:24.288 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:24.288 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:19:24.288 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:19:24.288 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:24.288 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:19:24.288 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:24.288 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:24.288 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:24.288 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:19:24.288 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:24.288 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:24.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.288 --rc genhtml_branch_coverage=1 00:19:24.288 --rc genhtml_function_coverage=1 00:19:24.288 --rc genhtml_legend=1 00:19:24.288 --rc geninfo_all_blocks=1 00:19:24.288 --rc geninfo_unexecuted_blocks=1 00:19:24.288 00:19:24.288 ' 00:19:24.288 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:24.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.288 --rc genhtml_branch_coverage=1 00:19:24.288 --rc genhtml_function_coverage=1 00:19:24.288 --rc genhtml_legend=1 00:19:24.288 --rc geninfo_all_blocks=1 00:19:24.288 --rc geninfo_unexecuted_blocks=1 00:19:24.288 00:19:24.288 ' 00:19:24.288 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:24.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.288 --rc genhtml_branch_coverage=1 00:19:24.288 --rc genhtml_function_coverage=1 00:19:24.288 --rc genhtml_legend=1 00:19:24.288 --rc geninfo_all_blocks=1 00:19:24.288 --rc geninfo_unexecuted_blocks=1 00:19:24.288 00:19:24.288 ' 00:19:24.288 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:24.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.288 --rc genhtml_branch_coverage=1 00:19:24.288 --rc genhtml_function_coverage=1 00:19:24.289 --rc genhtml_legend=1 00:19:24.289 --rc geninfo_all_blocks=1 00:19:24.289 --rc geninfo_unexecuted_blocks=1 00:19:24.289 00:19:24.289 ' 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:24.289 10:32:57 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:24.289 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:19:24.289 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:29.618 
10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:29.618 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:29.618 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:29.618 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:29.619 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:29.619 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:29.619 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:29.619 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:29.619 Found net devices under 0000:af:00.0: cvl_0_0 00:19:29.619 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:29.619 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:29.619 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:29.619 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:29.619 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:29.619 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:29.619 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:29.619 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:29.619 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:29.619 Found net devices under 0000:af:00.1: cvl_0_1 00:19:29.619 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:29.619 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:29.619 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:19:29.619 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:29.619 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:29.619 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:29.619 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:29.619 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:29.619 10:33:03 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:29.619 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:29.619 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:29.619 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:29.619 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:29.619 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:29.619 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:29.619 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:29.619 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:29.619 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:29.619 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:29.619 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:29.619 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:29.878 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:29.878 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:29.878 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:29.878 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:29.878 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:29.878 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:29.878 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:29.878 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:29.878 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:29.878 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.384 ms 00:19:29.878 00:19:29.878 --- 10.0.0.2 ping statistics --- 00:19:29.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.878 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:19:29.878 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:29.878 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:29.878 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:19:29.878 00:19:29.878 --- 10.0.0.1 ping statistics --- 00:19:29.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.878 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:19:29.878 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:29.878 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:19:29.878 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:29.878 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:29.878 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:29.878 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:29.878 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:29.878 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:29.878 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:29.878 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:19:29.878 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:29.878 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:29.878 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:29.878 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=1553428 00:19:29.878 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:29.878 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 1553428 00:19:29.878 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 1553428 ']' 00:19:29.878 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:29.878 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:29.878 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:29.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:29.878 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:29.878 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:29.878 [2024-12-12 10:33:03.885316] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
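Note on the nvmf_tcp_init sequence above: the harness turns the dual-port E810 NIC into a point-to-point NVMe/TCP link by moving the target port (cvl_0_0) into a private network namespace while the initiator port (cvl_0_1) stays in the root namespace, then opens TCP port 4420 and ping-checks both directions before starting the target. A minimal standalone sketch of the same setup, using the interface names, addresses, and port shown in the trace (run as root; the SPDK_NVMF comment tag on the iptables rule is dropped here for brevity):

# Clear any stale addressing, then isolate the target-side port in its own namespace.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Address both ends of the point-to-point link: initiator in the root ns, target in the new ns.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Admit NVMe/TCP traffic on the default port, then verify reachability both ways.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With this in place the target runs inside the namespace, which is why the nvmf_tgt launch just above is prefixed with ip netns exec cvl_0_0_ns_spdk (the NVMF_TARGET_NS_CMD wrapper).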
00:19:29.878 [2024-12-12 10:33:03.885361] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:30.137 [2024-12-12 10:33:03.962405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.137 [2024-12-12 10:33:04.002058] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:30.138 [2024-12-12 10:33:04.002092] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:30.138 [2024-12-12 10:33:04.002099] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:30.138 [2024-12-12 10:33:04.002105] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:30.138 [2024-12-12 10:33:04.002110] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:30.138 [2024-12-12 10:33:04.002614] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.138 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:30.138 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:19:30.138 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:30.138 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:30.138 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:30.138 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:30.138 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:30.138 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:30.138 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:19:30.138 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.138 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:30.138 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.138 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:19:30.138 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.138 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:30.138 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.138 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:19:30.138 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.138 10:33:04 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:30.138 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.138 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:30.138 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.138 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:30.138 Malloc0 00:19:30.397 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.397 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:19:30.397 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.397 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:30.397 [2024-12-12 10:33:04.167897] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:30.397 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.397 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:30.397 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.397 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:30.397 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.397 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:30.397 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.397 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:30.397 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.397 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:30.397 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.397 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:30.397 [2024-12-12 10:33:04.196084] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:30.397 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.397 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:30.397 [2024-12-12 10:33:04.276040] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:31.772 Initializing NVMe Controllers 00:19:31.772 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:31.772 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:19:31.772 Initialization complete. Launching workers. 00:19:31.772 ======================================================== 00:19:31.772 Latency(us) 00:19:31.772 Device Information : IOPS MiB/s Average min max 00:19:31.772 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32270.47 7292.40 63847.38 00:19:31.772 ======================================================== 00:19:31.772 Total : 129.00 16.12 32270.47 7292.40 63847.38 00:19:31.772 00:19:31.772 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:19:31.772 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:19:31.772 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.772 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:31.772 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.772 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:19:31.772 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:19:31.772 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:31.772 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:19:31.772 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:31.772 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:19:31.772 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:31.773 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:19:31.773 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:31.773 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:31.773 rmmod nvme_tcp 00:19:31.773 rmmod nvme_fabrics 00:19:31.773 rmmod nvme_keyring 00:19:31.773 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:31.773 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:19:31.773 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:19:31.773 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 1553428 ']' 00:19:31.773 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 1553428 00:19:31.773 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 1553428 ']' 00:19:31.773 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 1553428 00:19:31.773 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:19:31.773 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:31.773 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1553428 00:19:32.032 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:32.032 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:32.032 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1553428' 00:19:32.032 killing process with pid 1553428 00:19:32.032 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 1553428 00:19:32.032 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 1553428 00:19:32.032 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:32.032 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:32.032 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:32.032 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:19:32.032 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:19:32.032 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:32.032 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:19:32.032 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:32.032 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:32.032 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.032 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:32.032 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.568 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:34.568 00:19:34.568 real 0m10.291s 00:19:34.568 user 0m3.892s 00:19:34.568 sys 0m4.813s 00:19:34.568 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:34.568 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:34.568 ************************************ 00:19:34.568 END TEST nvmf_wait_for_buf 00:19:34.568 ************************************ 00:19:34.568 10:33:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:19:34.568 10:33:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:19:34.568 10:33:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:19:34.568 10:33:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:19:34.568 10:33:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:19:34.568 10:33:08 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:39.859 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:39.859 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:39.859 Found net devices under 0000:af:00.0: cvl_0_0 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:39.859 Found net devices under 0000:af:00.1: cvl_0_1 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:39.859 ************************************ 00:19:39.859 START TEST nvmf_perf_adq 00:19:39.859 ************************************ 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:39.859 * Looking for test storage... 00:19:39.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:19:39.859 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:40.119 10:33:13 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:40.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.119 --rc genhtml_branch_coverage=1 00:19:40.119 --rc genhtml_function_coverage=1 00:19:40.119 --rc genhtml_legend=1 00:19:40.119 --rc geninfo_all_blocks=1 00:19:40.119 --rc geninfo_unexecuted_blocks=1 00:19:40.119 00:19:40.119 ' 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:40.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.119 --rc genhtml_branch_coverage=1 00:19:40.119 --rc genhtml_function_coverage=1 00:19:40.119 --rc genhtml_legend=1 00:19:40.119 --rc geninfo_all_blocks=1 00:19:40.119 --rc geninfo_unexecuted_blocks=1 00:19:40.119 00:19:40.119 ' 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:40.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.119 --rc genhtml_branch_coverage=1 00:19:40.119 --rc genhtml_function_coverage=1 00:19:40.119 --rc genhtml_legend=1 00:19:40.119 --rc geninfo_all_blocks=1 00:19:40.119 --rc geninfo_unexecuted_blocks=1 00:19:40.119 00:19:40.119 ' 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:40.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.119 --rc genhtml_branch_coverage=1 00:19:40.119 --rc genhtml_function_coverage=1 00:19:40.119 --rc genhtml_legend=1 00:19:40.119 --rc geninfo_all_blocks=1 00:19:40.119 --rc geninfo_unexecuted_blocks=1 00:19:40.119 00:19:40.119 ' 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
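One step in the trace above deserves a gloss: before exporting the LCOV options, scripts/common.sh checks the installed lcov with a field-wise dotted-version comparison. `lt 1.15 2` splits both strings on '.', '-' and ':' and decides at the first numeric field that differs, which is why 1.15 sorts below 2. A condensed re-implementation of that comparison, paraphrased from the trace rather than copied from scripts/common.sh (edge cases such as non-numeric fields are simplified here):

# lt VER1 VER2 -> success (0) when VER1 is strictly older than VER2.
lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"     # e.g. 1.15 -> (1 15)
    IFS='.-:' read -ra ver2 <<< "$2"     # e.g. 2    -> (2)
    local v len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        [[ $a =~ ^[0-9]+$ ]] || a=0      # simplification: treat non-numeric fields as 0
        [[ $b =~ ^[0-9]+$ ]] || b=0
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1                             # equal versions are not less-than
}

lt 1.15 2 && echo "lcov is older than 2"   # matches the trace: 1 < 2 decides in field 0

Because the comparison succeeds, the trace goes on to set lcov_rc_opt and export the --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 flags seen above.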
00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:40.119 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:40.120 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.120 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.120 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.120 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:40.120 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.120 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:19:40.120 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:40.120 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:40.120 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:40.120 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:40.120 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:40.120 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:40.120 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:40.120 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:40.120 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:40.120 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:40.120 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:40.120 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:40.120 10:33:13 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:46.687 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:46.687 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:19:46.687 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:46.687 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:46.687 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:46.687 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:46.687 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:46.687 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:46.687 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:46.687 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:46.687 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:46.687 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:46.687 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:46.687 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:46.687 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:46.687 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:46.687 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:46.687 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:46.687 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:46.687 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:46.687 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:46.687 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:46.687 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:46.687 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:46.687 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:46.687 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:46.687 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:46.687 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:46.687 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:46.687 10:33:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:46.687 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:46.687 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:46.687 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:46.687 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:46.687 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:46.687 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:46.687 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:46.687 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:46.687 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:46.688 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:46.688 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:46.688 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:46.688 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:46.688 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:46.688 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:46.688 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:46.688 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:46.688 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:46.688 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:46.688 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:46.688 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:46.688 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:46.688 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:46.688 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:46.688 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:46.688 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:46.688 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:46.688 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:46.688 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:46.688 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:46.688 Found net devices under 0000:af:00.0: cvl_0_0 00:19:46.688 10:33:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:46.688 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:46.688 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:46.688 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:46.688 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:46.688 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:46.688 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:46.688 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:46.688 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:46.688 Found net devices under 0000:af:00.1: cvl_0_1 00:19:46.688 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:46.688 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:46.688 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:46.688 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:46.688 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:46.688 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:19:46.688 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:19:46.688 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:19:46.946 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:19:49.482 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:19:54.758 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:19:54.758 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:54.758 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:54.758 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:54.758 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:54.758 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:54.758 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:54.758 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:54.758 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:54.758 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:54.758 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:19:54.758 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:54.758 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:54.758 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:54.758 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:19:54.758 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:54.758 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:54.758 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:54.758 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:54.758 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:54.758 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:54.758 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:54.758 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:54.758 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:54.758 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:54.758 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:54.758 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:54.758 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:54.758 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:54.758 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:54.758 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:54.758 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:54.758 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:54.758 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:54.758 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:54.758 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:54.758 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:54.758 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:54.758 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:54.758 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:54.758 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:19:54.758 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:54.759 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:54.759 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:54.759 Found net devices under 0000:af:00.0: cvl_0_0 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:54.759 Found net devices under 0000:af:00.1: cvl_0_1 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:54.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:54.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.547 ms 00:19:54.759 00:19:54.759 --- 10.0.0.2 ping statistics --- 00:19:54.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:54.759 rtt min/avg/max/mdev = 0.547/0.547/0.547/0.000 ms 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:54.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:54.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:19:54.759 00:19:54.759 --- 10.0.0.1 ping statistics --- 00:19:54.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:54.759 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1562175 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1562175 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1562175 ']' 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.759 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:54.760 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:54.760 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:54.760 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:54.760 [2024-12-12 10:33:28.667661] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:19:54.760 [2024-12-12 10:33:28.667705] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:54.760 [2024-12-12 10:33:28.745592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:55.019 [2024-12-12 10:33:28.789295] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:55.019 [2024-12-12 10:33:28.789330] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:55.019 [2024-12-12 10:33:28.789337] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:55.019 [2024-12-12 10:33:28.789342] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:55.019 [2024-12-12 10:33:28.789347] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:55.019 [2024-12-12 10:33:28.790831] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:55.019 [2024-12-12 10:33:28.790933] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:55.019 [2024-12-12 10:33:28.791038] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.019 [2024-12-12 10:33:28.791040] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:19:55.019 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:55.019 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:19:55.019 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:55.019 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:55.019 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:55.019 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:55.019 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:19:55.019 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:55.019 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:55.019 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.019 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:55.019 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.019 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:55.019 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:55.019 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.019 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:55.019 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.019 
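[editor's note] The records above show adq_configure_nvmf_target driving the paused target over RPC: it queries the default sock implementation (posix), enables placement id 0 plus server-side zero-copy sends, and only then lets the framework initialize. A sketch of the same sequence with scripts/rpc.py, assuming the default /var/tmp/spdk.sock RPC socket:

    # Placement id 0 disables socket-based placement, so new connections
    # spread across the target's poll groups.
    impl=$(./scripts/rpc.py sock_get_default_impl | jq -r .impl_name)
    ./scripts/rpc.py sock_impl_set_options -i "$impl" \
        --enable-placement-id 0 --enable-zerocopy-send-server
    # Socket options must be set before framework init; after this they are fixed.
    ./scripts/rpc.py framework_start_init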
10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:55.019 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.019 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:55.019 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.019 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:55.019 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.019 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:55.019 [2024-12-12 10:33:29.001333] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:55.019 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.019 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:55.020 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.020 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:55.020 Malloc1 00:19:55.020 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.020 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:55.020 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.020 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:55.278 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.278 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:55.278 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.278 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:55.278 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.278 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:55.278 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.278 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:55.278 [2024-12-12 10:33:29.058552] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:55.278 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.278 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1562327 00:19:55.278 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:19:55.278 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:57.180 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:19:57.180 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.180 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:57.180 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.180 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:19:57.180 "tick_rate": 2100000000, 00:19:57.180 "poll_groups": [ 00:19:57.180 { 00:19:57.180 "name": "nvmf_tgt_poll_group_000", 00:19:57.180 "admin_qpairs": 1, 00:19:57.180 "io_qpairs": 1, 00:19:57.181 "current_admin_qpairs": 1, 00:19:57.181 "current_io_qpairs": 1, 00:19:57.181 "pending_bdev_io": 0, 00:19:57.181 "completed_nvme_io": 20215, 00:19:57.181 "transports": [ 00:19:57.181 { 00:19:57.181 "trtype": "TCP" 00:19:57.181 } 00:19:57.181 ] 00:19:57.181 }, 00:19:57.181 { 00:19:57.181 "name": "nvmf_tgt_poll_group_001", 00:19:57.181 "admin_qpairs": 0, 00:19:57.181 "io_qpairs": 1, 00:19:57.181 "current_admin_qpairs": 0, 00:19:57.181 "current_io_qpairs": 1, 00:19:57.181 "pending_bdev_io": 0, 00:19:57.181 "completed_nvme_io": 20138, 00:19:57.181 "transports": [ 00:19:57.181 { 00:19:57.181 "trtype": "TCP" 00:19:57.181 } 00:19:57.181 ] 00:19:57.181 }, 00:19:57.181 { 00:19:57.181 "name": "nvmf_tgt_poll_group_002", 00:19:57.181 "admin_qpairs": 0, 00:19:57.181 "io_qpairs": 1, 00:19:57.181 "current_admin_qpairs": 0, 00:19:57.181 "current_io_qpairs": 1, 00:19:57.181 "pending_bdev_io": 0, 00:19:57.181 "completed_nvme_io": 20315, 00:19:57.181 "transports": [ 00:19:57.181 { 00:19:57.181 "trtype": "TCP" 00:19:57.181 } 00:19:57.181 ] 00:19:57.181 }, 00:19:57.181 { 00:19:57.181 "name": "nvmf_tgt_poll_group_003", 00:19:57.181 "admin_qpairs": 0, 00:19:57.181 "io_qpairs": 1, 00:19:57.181 "current_admin_qpairs": 0, 00:19:57.181 "current_io_qpairs": 1, 00:19:57.181 "pending_bdev_io": 0, 00:19:57.181 "completed_nvme_io": 19962, 00:19:57.181 "transports": [ 00:19:57.181 { 00:19:57.181 "trtype": "TCP" 00:19:57.181 } 00:19:57.181 ] 00:19:57.181 } 00:19:57.181 ] 00:19:57.181 }' 00:19:57.181 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:57.181 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:19:57.181 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:19:57.181 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:19:57.181 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1562327 00:20:05.303 Initializing NVMe Controllers 00:20:05.303 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:05.303 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:05.303 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:05.303 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:05.303 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:20:05.303 Initialization complete. Launching workers. 00:20:05.303 ======================================================== 00:20:05.303 Latency(us) 00:20:05.303 Device Information : IOPS MiB/s Average min max 00:20:05.303 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10803.50 42.20 5925.46 2324.30 10117.12 00:20:05.303 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10684.30 41.74 5990.89 2238.00 10627.70 00:20:05.303 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10718.30 41.87 5972.61 2397.45 10530.20 00:20:05.303 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10609.10 41.44 6033.30 2397.66 10589.34 00:20:05.303 ======================================================== 00:20:05.303 Total : 42815.20 167.25 5980.31 2238.00 10627.70 00:20:05.303 00:20:05.303 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:20:05.303 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:05.303 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:05.303 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:05.303 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:05.303 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:05.303 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:05.303 rmmod nvme_tcp 00:20:05.303 rmmod nvme_fabrics 00:20:05.303 rmmod nvme_keyring 00:20:05.303 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:05.303 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:05.303 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:05.303 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1562175 ']' 00:20:05.303 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1562175 00:20:05.303 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1562175 ']' 00:20:05.303 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1562175 00:20:05.303 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:05.303 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:05.303 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1562175 00:20:05.563 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:05.563 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:05.563 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1562175' 00:20:05.563 killing process with pid 1562175 00:20:05.563 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1562175 00:20:05.563 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1562175 00:20:05.563 10:33:39 
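[editor's note] The latency table above comes from spdk_nvme_perf, pointed at the in-namespace target over the physical link. The flags in the logged invocation map directly to the workload shape: queue depth 64, 4 KiB random reads for 10 seconds, initiator threads pinned to cores 4-7 (mask 0xF0), and one TCP connection per enabled core, which matches the four active IO qpairs counted in the stats check above. A hedged restatement of that command, assuming the same listener address:

    # 4 initiator cores in 0xF0 -> 4 TCP connections, one per core.
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'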
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:05.563 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:05.563 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:05.563 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:05.563 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:05.563 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:05.563 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:05.563 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:05.563 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:05.563 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:05.563 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:05.563 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:08.101 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:08.101 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:20:08.101 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:08.101 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:09.038 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:11.576 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:16.858 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:20:16.858 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:16.858 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:16.858 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:16.858 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:16.858 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:16.858 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:16.858 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:16.858 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.858 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:16.858 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:16.858 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:16.858 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:16.858 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:16.858 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:16.858 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:16.858 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:16.858 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:16.858 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:16.858 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:16.858 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:16.858 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:16.858 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:16.858 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:16.858 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:16.858 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:16.858 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:16.858 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:16.858 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:16.858 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:16.858 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:16.858 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:16.859 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:16.859 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:16.859 Found net devices under 0000:af:00.0: cvl_0_0 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:16.859 Found net devices under 0000:af:00.1: cvl_0_1 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:16.859 10:33:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:16.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:16.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.583 ms 00:20:16.859 00:20:16.859 --- 10.0.0.2 ping statistics --- 00:20:16.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.859 rtt min/avg/max/mdev = 0.583/0.583/0.583/0.000 ms 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:16.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:16.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:20:16.859 00:20:16.859 --- 10.0.0.1 ping statistics --- 00:20:16.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.859 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:20:16.859 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:16.860 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:16.860 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:16.860 net.core.busy_poll = 1 00:20:16.860 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:16.860 net.core.busy_read = 1 00:20:16.860 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:16.860 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:17.120 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:17.120 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:17.120 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:17.120 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:17.120 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:17.120 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:17.120 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:17.120 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1566226 00:20:17.120 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1566226 00:20:17.120 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:17.120 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1566226 ']' 00:20:17.120 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.120 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:17.120 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:17.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:17.120 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:17.120 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:17.120 [2024-12-12 10:33:51.075220] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:20:17.120 [2024-12-12 10:33:51.075264] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:17.379 [2024-12-12 10:33:51.152733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:17.379 [2024-12-12 10:33:51.196095] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
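[editor's note] Before this second target start, adq_configure_driver switched the ice interface into ADQ mode: hardware TC offload on, the channel-pkt-inspect-optimize private flag off, kernel busy polling enabled, and an mqprio root qdisc plus a flower filter that pins NVMe/TCP traffic to traffic class 1 in hardware. Condensed from the commands logged above, keeping the interface and namespace names of this run:

    NS="ip netns exec cvl_0_0_ns_spdk"
    $NS ethtool --offload cvl_0_0 hw-tc-offload on
    $NS ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # Two traffic classes: TC0 on queues 2@0, TC1 on queues 2@2, offloaded (hw 1).
    $NS tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    $NS tc qdisc add dev cvl_0_0 ingress
    # Steer NVMe/TCP (10.0.0.2:4420) into TC1, hardware only (skip_sw).
    $NS tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1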
00:20:17.379 [2024-12-12 10:33:51.196130] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:17.379 [2024-12-12 10:33:51.196137] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:17.379 [2024-12-12 10:33:51.196146] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:17.379 [2024-12-12 10:33:51.196151] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:17.380 [2024-12-12 10:33:51.197464] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:17.380 [2024-12-12 10:33:51.197588] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:20:17.380 [2024-12-12 10:33:51.197656] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.380 [2024-12-12 10:33:51.197657] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:20:17.948 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:17.948 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:17.948 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:17.948 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:17.948 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:17.948 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:17.948 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:20:17.948 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:17.948 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:17.948 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.948 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:17.948 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.208 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:18.208 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:18.208 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.208 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:18.208 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.208 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:18.208 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.208 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:18.208 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.208 10:33:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:18.208 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.208 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:18.208 [2024-12-12 10:33:52.077849] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:18.208 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.208 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:18.208 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.208 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:18.208 Malloc1 00:20:18.208 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.208 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:18.208 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.208 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:18.208 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.208 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:18.208 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.208 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:18.208 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.208 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:18.208 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.208 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:18.208 [2024-12-12 10:33:52.138401] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:18.208 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.208 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1566468 00:20:18.208 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:20:18.208 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:20.741 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:20:20.741 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.741 10:33:54 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:20.741 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.741 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:20:20.741 "tick_rate": 2100000000, 00:20:20.741 "poll_groups": [ 00:20:20.741 { 00:20:20.741 "name": "nvmf_tgt_poll_group_000", 00:20:20.741 "admin_qpairs": 1, 00:20:20.741 "io_qpairs": 1, 00:20:20.741 "current_admin_qpairs": 1, 00:20:20.741 "current_io_qpairs": 1, 00:20:20.741 "pending_bdev_io": 0, 00:20:20.741 "completed_nvme_io": 28936, 00:20:20.741 "transports": [ 00:20:20.741 { 00:20:20.741 "trtype": "TCP" 00:20:20.741 } 00:20:20.741 ] 00:20:20.741 }, 00:20:20.741 { 00:20:20.741 "name": "nvmf_tgt_poll_group_001", 00:20:20.741 "admin_qpairs": 0, 00:20:20.741 "io_qpairs": 3, 00:20:20.741 "current_admin_qpairs": 0, 00:20:20.741 "current_io_qpairs": 3, 00:20:20.741 "pending_bdev_io": 0, 00:20:20.741 "completed_nvme_io": 29419, 00:20:20.741 "transports": [ 00:20:20.741 { 00:20:20.741 "trtype": "TCP" 00:20:20.741 } 00:20:20.741 ] 00:20:20.741 }, 00:20:20.741 { 00:20:20.741 "name": "nvmf_tgt_poll_group_002", 00:20:20.741 "admin_qpairs": 0, 00:20:20.741 "io_qpairs": 0, 00:20:20.741 "current_admin_qpairs": 0, 00:20:20.741 "current_io_qpairs": 0, 00:20:20.741 "pending_bdev_io": 0, 00:20:20.741 "completed_nvme_io": 0, 00:20:20.741 "transports": [ 00:20:20.741 { 00:20:20.741 "trtype": "TCP" 00:20:20.741 } 00:20:20.741 ] 00:20:20.741 }, 00:20:20.741 { 00:20:20.741 "name": "nvmf_tgt_poll_group_003", 00:20:20.741 "admin_qpairs": 0, 00:20:20.741 "io_qpairs": 0, 00:20:20.741 "current_admin_qpairs": 0, 00:20:20.741 "current_io_qpairs": 0, 00:20:20.741 "pending_bdev_io": 0, 00:20:20.741 "completed_nvme_io": 0, 00:20:20.741 "transports": [ 00:20:20.741 { 00:20:20.741 "trtype": "TCP" 00:20:20.741 } 00:20:20.741 ] 00:20:20.741 } 00:20:20.741 ] 00:20:20.741 }' 00:20:20.741 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:20.741 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:20:20.741 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:20:20.741 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:20:20.741 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1566468 00:20:28.861 Initializing NVMe Controllers 00:20:28.861 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:28.861 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:28.861 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:28.861 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:28.861 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:28.861 Initialization complete. Launching workers. 
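[editor's note] The verification flips between the two passes. In the first run (placement id 0) the jq filter selected poll groups with current_io_qpairs == 1 and required all four to be busy; here, with placement id 1 and the tc filter steering traffic, the stats show the IO qpairs concentrated on poll groups 000 and 001 with none on 002/003, so the script instead counts idle groups and fails if fewer than two are empty. A sketch of that second check:

    # ADQ pass: at least 2 of the 4 poll groups should carry no IO qpairs.
    count=$(./scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
        | wc -l)
    (( count < 2 )) && { echo "qpairs were not steered as expected"; exit 1; }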
00:20:28.861 ======================================================== 00:20:28.861 Latency(us) 00:20:28.861 Device Information : IOPS MiB/s Average min max 00:20:28.861 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5547.50 21.67 11556.71 1795.13 60628.21 00:20:28.861 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5150.00 20.12 12425.53 1929.99 58350.59 00:20:28.861 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 15546.30 60.73 4116.28 1485.32 6717.06 00:20:28.861 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4754.70 18.57 13514.22 1452.36 59629.99 00:20:28.861 ======================================================== 00:20:28.861 Total : 30998.50 121.09 8269.80 1452.36 60628.21 00:20:28.861 00:20:28.861 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:20:28.861 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:28.861 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:28.861 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:28.861 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:28.861 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:28.861 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:28.861 rmmod nvme_tcp 00:20:28.861 rmmod nvme_fabrics 00:20:28.861 rmmod nvme_keyring 00:20:28.861 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:28.861 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:28.861 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:28.861 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1566226 ']' 00:20:28.861 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1566226 00:20:28.861 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1566226 ']' 00:20:28.861 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1566226 00:20:28.861 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:28.862 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:28.862 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1566226 00:20:28.862 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:28.862 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:28.862 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1566226' 00:20:28.862 killing process with pid 1566226 00:20:28.862 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1566226 00:20:28.862 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1566226 00:20:28.862 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:28.862 
10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:28.862 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:28.862 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:28.862 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:28.862 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:28.862 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:28.862 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:28.862 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:28.862 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.862 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:28.862 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:20:32.313 00:20:32.313 real 0m52.010s 00:20:32.313 user 2m46.807s 00:20:32.313 sys 0m10.386s 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:32.313 ************************************ 00:20:32.313 END TEST nvmf_perf_adq 00:20:32.313 ************************************ 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:32.313 ************************************ 00:20:32.313 START TEST nvmf_shutdown 00:20:32.313 ************************************ 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:32.313 * Looking for test storage... 
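The nvmftestfini teardown above shows why every firewall rule the suite installs is tagged: each one carries an SPDK_NVMF comment, so cleanup is just saving the ruleset, dropping the tagged lines, and restoring. Both halves of the pattern, taken from this run (cvl_0_1 is the initiator-side port; later in the log the install side appears verbatim during the next test's init):

# Install a rule tagged with an SPDK_NVMF comment...
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# ...and later remove every tagged rule without disturbing the rest of the table.
iptables-save | grep -v SPDK_NVMF | iptables-restore
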
00:20:32.313 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:32.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.313 --rc genhtml_branch_coverage=1 00:20:32.313 --rc genhtml_function_coverage=1 00:20:32.313 --rc genhtml_legend=1 00:20:32.313 --rc geninfo_all_blocks=1 00:20:32.313 --rc geninfo_unexecuted_blocks=1 00:20:32.313 00:20:32.313 ' 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:32.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.313 --rc genhtml_branch_coverage=1 00:20:32.313 --rc genhtml_function_coverage=1 00:20:32.313 --rc genhtml_legend=1 00:20:32.313 --rc geninfo_all_blocks=1 00:20:32.313 --rc geninfo_unexecuted_blocks=1 00:20:32.313 00:20:32.313 ' 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:32.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.313 --rc genhtml_branch_coverage=1 00:20:32.313 --rc genhtml_function_coverage=1 00:20:32.313 --rc genhtml_legend=1 00:20:32.313 --rc geninfo_all_blocks=1 00:20:32.313 --rc geninfo_unexecuted_blocks=1 00:20:32.313 00:20:32.313 ' 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:32.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.313 --rc genhtml_branch_coverage=1 00:20:32.313 --rc genhtml_function_coverage=1 00:20:32.313 --rc genhtml_legend=1 00:20:32.313 --rc geninfo_all_blocks=1 00:20:32.313 --rc geninfo_unexecuted_blocks=1 00:20:32.313 00:20:32.313 ' 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:32.313 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
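The scripts/common.sh trace above ('lt 1.15 2' driving cmp_versions) is how the suite decides whether the installed lcov predates 2.x and still needs the legacy '--rc lcov_*' option spellings: each version string is split on '.', '-' or ':' and compared numerically, component by component, with missing components treated as zero. A self-contained sketch of that comparison (not the library function itself):

# Numeric, component-wise version comparison in the style of scripts/common.sh.
ver_lt() {
    local IFS='.-:'
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # strictly smaller: less-than
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # strictly larger: not less-than
    done
    return 1   # all components equal: not less-than
}
ver_lt 1.15 2 && echo "lcov is pre-2.x: keep the legacy --rc option names"
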
00:20:32.313 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:32.313 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:32.313 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:32.313 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:32.313 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:32.313 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:32.313 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:32.313 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:32.313 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:32.313 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:32.313 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:32.313 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:20:32.314 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:32.314 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:32.314 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:32.314 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:32.314 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:32.314 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:20:32.314 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:32.314 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:32.314 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:32.314 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.314 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.314 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.314 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:32.314 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.314 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:20:32.314 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:32.314 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:32.314 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:32.314 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:32.314 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:32.314 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:32.314 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:32.314 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:32.314 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:32.314 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:32.314 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:32.314 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:32.314 10:34:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:32.314 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:32.314 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:32.314 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:32.314 ************************************ 00:20:32.314 START TEST nvmf_shutdown_tc1 00:20:32.314 ************************************ 00:20:32.314 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:20:32.314 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:20:32.314 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:32.314 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:32.314 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:32.314 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:32.314 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:32.314 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:32.314 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:32.314 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:32.314 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:32.314 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:32.314 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:32.314 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:32.314 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:38.882 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:38.882 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:38.882 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:38.882 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:38.882 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:38.882 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:38.882 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:38.882 10:34:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:20:38.882 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:38.882 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:20:38.882 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:20:38.882 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:20:38.882 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:20:38.882 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:20:38.882 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:38.882 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:38.882 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:38.882 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:38.882 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:38.882 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:38.882 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:38.882 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:38.882 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:38.882 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:38.882 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:38.882 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:38.882 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:38.882 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:38.882 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:38.882 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:38.882 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:38.882 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:38.882 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:38.882 10:34:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:38.882 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:38.882 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:38.882 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:38.882 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:38.882 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:38.882 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:38.882 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:38.882 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:38.882 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:38.882 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:38.883 Found net devices under 0000:af:00.0: cvl_0_0 00:20:38.883 10:34:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:38.883 Found net devices under 0000:af:00.1: cvl_0_1 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:38.883 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:38.883 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.374 ms 00:20:38.883 00:20:38.883 --- 10.0.0.2 ping statistics --- 00:20:38.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.883 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:38.883 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:38.883 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:20:38.883 00:20:38.883 --- 10.0.0.1 ping statistics --- 00:20:38.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.883 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:38.883 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:38.883 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:38.883 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:38.883 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:38.883 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:38.883 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=1571821 00:20:38.883 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:38.883 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 1571821 00:20:38.883 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1571821 ']' 00:20:38.883 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.883 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:38.883 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
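The ip/netns sequence above builds the whole fabric on one box: the E810's first port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as the target at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and nvmf_tgt is then launched inside the namespace via the 'ip netns exec' prefix. Condensed from the trace:

NS=cvl_0_0_ns_spdk
ip netns add $NS
ip link set cvl_0_0 netns $NS                          # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address
ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
ip link set cvl_0_1 up
ip netns exec $NS ip link set cvl_0_0 up
ip netns exec $NS ip link set lo up
ping -c 1 10.0.0.2                        # initiator -> target
ip netns exec $NS ping -c 1 10.0.0.1      # target -> initiator
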
00:20:38.883 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:38.883 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:38.883 [2024-12-12 10:34:12.063642] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:20:38.883 [2024-12-12 10:34:12.063686] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.883 [2024-12-12 10:34:12.139746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:38.883 [2024-12-12 10:34:12.180154] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:38.883 [2024-12-12 10:34:12.180189] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:38.883 [2024-12-12 10:34:12.180196] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:38.883 [2024-12-12 10:34:12.180201] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:38.883 [2024-12-12 10:34:12.180206] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:38.883 [2024-12-12 10:34:12.181725] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:20:38.883 [2024-12-12 10:34:12.181833] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:20:38.883 [2024-12-12 10:34:12.181940] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.883 [2024-12-12 10:34:12.181942] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:20:38.883 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:38.883 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:38.884 [2024-12-12 10:34:12.317782] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:38.884 10:34:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:38.884 Malloc1 
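The shutdown.sh loop above batches the target setup: for each of the ten subsystems it appends a block of RPC commands to rpcs.txt, then replays the whole file through the single rpc_cmd call at the end (Malloc1 just above is the first of its outputs; Malloc2 through Malloc10 follow below). The exact per-subsystem block lives in shutdown.sh; a hypothetical sketch of the pattern, reusing the MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE values (64, 512) set earlier, with an illustrative serial number:

# Append one block of RPCs per subsystem, then execute them in one pass
# (rpc.py runs commands line by line when fed a file on stdin).
rm -f rpcs.txt
for i in {1..10}; do
cat >> rpcs.txt <<EOF
bdev_malloc_create 64 512 -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
./scripts/rpc.py < rpcs.txt
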
00:20:38.884 [2024-12-12 10:34:12.430925] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:38.884 Malloc2 00:20:38.884 Malloc3 00:20:38.884 Malloc4 00:20:38.884 Malloc5 00:20:38.884 Malloc6 00:20:38.884 Malloc7 00:20:38.884 Malloc8 00:20:38.884 Malloc9 00:20:38.884 Malloc10 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1572086 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1572086 /var/tmp/bdevperf.sock 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1572086 ']' 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:38.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
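bdev_svc above receives its controller map as '--json /dev/fd/63': gen_nvmf_target_json (defined in test/nvmf/common.sh) writes the JSON into a process substitution, so the "file" the app opens is a descriptor on the generating shell's pipe and nothing touches disk. The mechanism in miniature, with a stand-in generator in place of the real function:

# Process substitution: the child opens the generator's output as /dev/fd/<n>.
gen_json() { printf '{"subsystems": []}\n'; }    # stand-in for gen_nvmf_target_json
cat <(gen_json)                                  # cat was handed a /dev/fd/<n> path
# The test does the same with the real generator, e.g.:
#   bdevperf --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
#       -q 64 -o 65536 -w verify -t 1
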
00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:38.884 { 00:20:38.884 "params": { 00:20:38.884 "name": "Nvme$subsystem", 00:20:38.884 "trtype": "$TEST_TRANSPORT", 00:20:38.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.884 "adrfam": "ipv4", 00:20:38.884 "trsvcid": "$NVMF_PORT", 00:20:38.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.884 "hdgst": ${hdgst:-false}, 00:20:38.884 "ddgst": ${ddgst:-false} 00:20:38.884 }, 00:20:38.884 "method": "bdev_nvme_attach_controller" 00:20:38.884 } 00:20:38.884 EOF 00:20:38.884 )") 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:38.884 { 00:20:38.884 "params": { 00:20:38.884 "name": "Nvme$subsystem", 00:20:38.884 "trtype": "$TEST_TRANSPORT", 00:20:38.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.884 "adrfam": "ipv4", 00:20:38.884 "trsvcid": "$NVMF_PORT", 00:20:38.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.884 "hdgst": ${hdgst:-false}, 00:20:38.884 "ddgst": ${ddgst:-false} 00:20:38.884 }, 00:20:38.884 "method": "bdev_nvme_attach_controller" 00:20:38.884 } 00:20:38.884 EOF 00:20:38.884 )") 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:38.884 { 00:20:38.884 "params": { 00:20:38.884 "name": "Nvme$subsystem", 00:20:38.884 "trtype": "$TEST_TRANSPORT", 00:20:38.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.884 "adrfam": "ipv4", 00:20:38.884 "trsvcid": "$NVMF_PORT", 00:20:38.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.884 "hdgst": ${hdgst:-false}, 00:20:38.884 "ddgst": ${ddgst:-false} 00:20:38.884 }, 00:20:38.884 "method": "bdev_nvme_attach_controller" 00:20:38.884 } 00:20:38.884 EOF 00:20:38.884 )") 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:20:38.884 { 00:20:38.884 "params": { 00:20:38.884 "name": "Nvme$subsystem", 00:20:38.884 "trtype": "$TEST_TRANSPORT", 00:20:38.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.884 "adrfam": "ipv4", 00:20:38.884 "trsvcid": "$NVMF_PORT", 00:20:38.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.884 "hdgst": ${hdgst:-false}, 00:20:38.884 "ddgst": ${ddgst:-false} 00:20:38.884 }, 00:20:38.884 "method": "bdev_nvme_attach_controller" 00:20:38.884 } 00:20:38.884 EOF 00:20:38.884 )") 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:38.884 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:38.884 { 00:20:38.884 "params": { 00:20:38.884 "name": "Nvme$subsystem", 00:20:38.884 "trtype": "$TEST_TRANSPORT", 00:20:38.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.884 "adrfam": "ipv4", 00:20:38.884 "trsvcid": "$NVMF_PORT", 00:20:38.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.884 "hdgst": ${hdgst:-false}, 00:20:38.884 "ddgst": ${ddgst:-false} 00:20:38.885 }, 00:20:38.885 "method": "bdev_nvme_attach_controller" 00:20:38.885 } 00:20:38.885 EOF 00:20:38.885 )") 00:20:38.885 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:38.885 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:38.885 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:38.885 { 00:20:38.885 "params": { 00:20:38.885 "name": "Nvme$subsystem", 00:20:38.885 "trtype": "$TEST_TRANSPORT", 00:20:38.885 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.885 "adrfam": "ipv4", 00:20:38.885 "trsvcid": "$NVMF_PORT", 00:20:38.885 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.885 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.885 "hdgst": ${hdgst:-false}, 00:20:38.885 "ddgst": ${ddgst:-false} 00:20:38.885 }, 00:20:38.885 "method": "bdev_nvme_attach_controller" 00:20:38.885 } 00:20:38.885 EOF 00:20:38.885 )") 00:20:38.885 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:38.885 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:38.885 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:38.885 { 00:20:38.885 "params": { 00:20:38.885 "name": "Nvme$subsystem", 00:20:38.885 "trtype": "$TEST_TRANSPORT", 00:20:38.885 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.885 "adrfam": "ipv4", 00:20:38.885 "trsvcid": "$NVMF_PORT", 00:20:38.885 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.885 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.885 "hdgst": ${hdgst:-false}, 00:20:38.885 "ddgst": ${ddgst:-false} 00:20:38.885 }, 00:20:38.885 "method": "bdev_nvme_attach_controller" 00:20:38.885 } 00:20:38.885 EOF 00:20:38.885 )") 00:20:38.885 [2024-12-12 10:34:12.901248] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:20:38.885 [2024-12-12 10:34:12.901296] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:38.885 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:39.143 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:39.143 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:39.143 { 00:20:39.143 "params": { 00:20:39.143 "name": "Nvme$subsystem", 00:20:39.143 "trtype": "$TEST_TRANSPORT", 00:20:39.143 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:39.143 "adrfam": "ipv4", 00:20:39.143 "trsvcid": "$NVMF_PORT", 00:20:39.143 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:39.143 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:39.143 "hdgst": ${hdgst:-false}, 00:20:39.143 "ddgst": ${ddgst:-false} 00:20:39.143 }, 00:20:39.143 "method": "bdev_nvme_attach_controller" 00:20:39.143 } 00:20:39.143 EOF 00:20:39.143 )") 00:20:39.143 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:39.143 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:39.143 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:39.143 { 00:20:39.143 "params": { 00:20:39.143 "name": "Nvme$subsystem", 00:20:39.143 "trtype": "$TEST_TRANSPORT", 00:20:39.143 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:39.143 "adrfam": "ipv4", 00:20:39.143 "trsvcid": "$NVMF_PORT", 00:20:39.143 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:39.143 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:39.143 "hdgst": ${hdgst:-false}, 00:20:39.143 "ddgst": ${ddgst:-false} 00:20:39.143 }, 00:20:39.143 "method": "bdev_nvme_attach_controller" 00:20:39.143 } 00:20:39.143 EOF 00:20:39.143 )") 00:20:39.143 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:39.143 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:39.143 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:39.143 { 00:20:39.143 "params": { 00:20:39.143 "name": "Nvme$subsystem", 00:20:39.143 "trtype": "$TEST_TRANSPORT", 00:20:39.143 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:39.143 "adrfam": "ipv4", 00:20:39.143 "trsvcid": "$NVMF_PORT", 00:20:39.143 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:39.143 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:39.143 "hdgst": ${hdgst:-false}, 00:20:39.143 "ddgst": ${ddgst:-false} 00:20:39.143 }, 00:20:39.143 "method": "bdev_nvme_attach_controller" 00:20:39.143 } 00:20:39.143 EOF 00:20:39.143 )") 00:20:39.143 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:39.143 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
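The config+= heredoc traces above are gen_nvmf_target_json assembling that JSON: one bdev_nvme_attach_controller parameter block per subsystem is captured into a bash array, 'jq .' validates the assembled document, and the comma-join (the 'IFS=,' plus "printf '%s\n'" just below) emits the final text handed to bdev_svc. A reduced sketch of the assembly for two subsystems, wrapped in a bare JSON array here so jq can check it (common.sh embeds the blocks in its full target config instead):

# Build one attach-controller block per subsystem, then comma-join and validate.
config=()
for i in 1 2; do
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$i",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$i",
    "hostnqn": "nqn.2016-06.io.spdk:host$i",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
( IFS=,; printf '[%s]\n' "${config[*]}" ) | jq .
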
00:20:39.143 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:39.143 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:39.143 "params": { 00:20:39.143 "name": "Nvme1", 00:20:39.143 "trtype": "tcp", 00:20:39.143 "traddr": "10.0.0.2", 00:20:39.143 "adrfam": "ipv4", 00:20:39.143 "trsvcid": "4420", 00:20:39.143 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:39.143 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:39.143 "hdgst": false, 00:20:39.143 "ddgst": false 00:20:39.143 }, 00:20:39.143 "method": "bdev_nvme_attach_controller" 00:20:39.143 },{ 00:20:39.143 "params": { 00:20:39.143 "name": "Nvme2", 00:20:39.143 "trtype": "tcp", 00:20:39.143 "traddr": "10.0.0.2", 00:20:39.143 "adrfam": "ipv4", 00:20:39.143 "trsvcid": "4420", 00:20:39.143 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:39.143 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:39.143 "hdgst": false, 00:20:39.143 "ddgst": false 00:20:39.143 }, 00:20:39.143 "method": "bdev_nvme_attach_controller" 00:20:39.143 },{ 00:20:39.143 "params": { 00:20:39.143 "name": "Nvme3", 00:20:39.143 "trtype": "tcp", 00:20:39.143 "traddr": "10.0.0.2", 00:20:39.143 "adrfam": "ipv4", 00:20:39.143 "trsvcid": "4420", 00:20:39.143 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:39.143 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:39.143 "hdgst": false, 00:20:39.143 "ddgst": false 00:20:39.143 }, 00:20:39.143 "method": "bdev_nvme_attach_controller" 00:20:39.143 },{ 00:20:39.143 "params": { 00:20:39.143 "name": "Nvme4", 00:20:39.143 "trtype": "tcp", 00:20:39.143 "traddr": "10.0.0.2", 00:20:39.143 "adrfam": "ipv4", 00:20:39.143 "trsvcid": "4420", 00:20:39.143 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:39.143 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:39.143 "hdgst": false, 00:20:39.143 "ddgst": false 00:20:39.143 }, 00:20:39.143 "method": "bdev_nvme_attach_controller" 00:20:39.143 },{ 00:20:39.143 "params": { 00:20:39.143 "name": "Nvme5", 00:20:39.143 "trtype": "tcp", 00:20:39.143 "traddr": "10.0.0.2", 00:20:39.143 "adrfam": "ipv4", 00:20:39.143 "trsvcid": "4420", 00:20:39.143 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:39.143 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:39.143 "hdgst": false, 00:20:39.143 "ddgst": false 00:20:39.143 }, 00:20:39.143 "method": "bdev_nvme_attach_controller" 00:20:39.143 },{ 00:20:39.143 "params": { 00:20:39.143 "name": "Nvme6", 00:20:39.143 "trtype": "tcp", 00:20:39.143 "traddr": "10.0.0.2", 00:20:39.143 "adrfam": "ipv4", 00:20:39.143 "trsvcid": "4420", 00:20:39.143 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:39.143 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:39.143 "hdgst": false, 00:20:39.143 "ddgst": false 00:20:39.143 }, 00:20:39.143 "method": "bdev_nvme_attach_controller" 00:20:39.143 },{ 00:20:39.143 "params": { 00:20:39.143 "name": "Nvme7", 00:20:39.143 "trtype": "tcp", 00:20:39.143 "traddr": "10.0.0.2", 00:20:39.143 "adrfam": "ipv4", 00:20:39.143 "trsvcid": "4420", 00:20:39.143 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:39.143 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:39.143 "hdgst": false, 00:20:39.143 "ddgst": false 00:20:39.143 }, 00:20:39.143 "method": "bdev_nvme_attach_controller" 00:20:39.143 },{ 00:20:39.143 "params": { 00:20:39.143 "name": "Nvme8", 00:20:39.143 "trtype": "tcp", 00:20:39.143 "traddr": "10.0.0.2", 00:20:39.143 "adrfam": "ipv4", 00:20:39.143 "trsvcid": "4420", 00:20:39.143 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:39.143 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:20:39.143 "hdgst": false, 00:20:39.143 "ddgst": false 00:20:39.143 }, 00:20:39.143 "method": "bdev_nvme_attach_controller" 00:20:39.143 },{ 00:20:39.143 "params": { 00:20:39.143 "name": "Nvme9", 00:20:39.143 "trtype": "tcp", 00:20:39.143 "traddr": "10.0.0.2", 00:20:39.143 "adrfam": "ipv4", 00:20:39.143 "trsvcid": "4420", 00:20:39.143 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:39.143 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:39.143 "hdgst": false, 00:20:39.143 "ddgst": false 00:20:39.143 }, 00:20:39.143 "method": "bdev_nvme_attach_controller" 00:20:39.143 },{ 00:20:39.143 "params": { 00:20:39.143 "name": "Nvme10", 00:20:39.143 "trtype": "tcp", 00:20:39.143 "traddr": "10.0.0.2", 00:20:39.143 "adrfam": "ipv4", 00:20:39.143 "trsvcid": "4420", 00:20:39.143 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:39.143 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:39.143 "hdgst": false, 00:20:39.143 "ddgst": false 00:20:39.143 }, 00:20:39.143 "method": "bdev_nvme_attach_controller" 00:20:39.143 }' 00:20:39.143 [2024-12-12 10:34:12.977209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.143 [2024-12-12 10:34:13.018363] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:41.041 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:41.041 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:41.041 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:41.041 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.041 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:41.041 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.041 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1572086 00:20:41.041 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:20:41.041 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:20:41.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1572086 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:41.974 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1571821 00:20:41.974 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:41.974 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:41.974 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:41.974 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:41.974 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:20:41.974 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:41.974 { 00:20:41.974 "params": { 00:20:41.974 "name": "Nvme$subsystem", 00:20:41.974 "trtype": "$TEST_TRANSPORT", 00:20:41.974 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.974 "adrfam": "ipv4", 00:20:41.974 "trsvcid": "$NVMF_PORT", 00:20:41.974 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.974 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.974 "hdgst": ${hdgst:-false}, 00:20:41.974 "ddgst": ${ddgst:-false} 00:20:41.974 }, 00:20:41.974 "method": "bdev_nvme_attach_controller" 00:20:41.974 } 00:20:41.974 EOF 00:20:41.974 )") 00:20:41.974 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:41.974 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:41.974 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:41.974 { 00:20:41.974 "params": { 00:20:41.974 "name": "Nvme$subsystem", 00:20:41.974 "trtype": "$TEST_TRANSPORT", 00:20:41.974 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.975 "adrfam": "ipv4", 00:20:41.975 "trsvcid": "$NVMF_PORT", 00:20:41.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.975 "hdgst": ${hdgst:-false}, 00:20:41.975 "ddgst": ${ddgst:-false} 00:20:41.975 }, 00:20:41.975 "method": "bdev_nvme_attach_controller" 00:20:41.975 } 00:20:41.975 EOF 00:20:41.975 )") 00:20:41.975 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:41.975 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:41.975 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:41.975 { 00:20:41.975 "params": { 00:20:41.975 "name": "Nvme$subsystem", 00:20:41.975 "trtype": "$TEST_TRANSPORT", 00:20:41.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.975 "adrfam": "ipv4", 00:20:41.975 "trsvcid": "$NVMF_PORT", 00:20:41.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.975 "hdgst": ${hdgst:-false}, 00:20:41.975 "ddgst": ${ddgst:-false} 00:20:41.975 }, 00:20:41.975 "method": "bdev_nvme_attach_controller" 00:20:41.975 } 00:20:41.975 EOF 00:20:41.975 )") 00:20:41.975 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:41.975 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:41.975 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:41.975 { 00:20:41.975 "params": { 00:20:41.975 "name": "Nvme$subsystem", 00:20:41.975 "trtype": "$TEST_TRANSPORT", 00:20:41.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.975 "adrfam": "ipv4", 00:20:41.975 "trsvcid": "$NVMF_PORT", 00:20:41.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.975 "hdgst": ${hdgst:-false}, 00:20:41.975 "ddgst": ${ddgst:-false} 00:20:41.975 }, 00:20:41.975 "method": "bdev_nvme_attach_controller" 00:20:41.975 } 00:20:41.975 EOF 00:20:41.975 )") 00:20:41.975 10:34:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:41.975 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:41.975 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:41.975 { 00:20:41.975 "params": { 00:20:41.975 "name": "Nvme$subsystem", 00:20:41.975 "trtype": "$TEST_TRANSPORT", 00:20:41.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.975 "adrfam": "ipv4", 00:20:41.975 "trsvcid": "$NVMF_PORT", 00:20:41.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.975 "hdgst": ${hdgst:-false}, 00:20:41.975 "ddgst": ${ddgst:-false} 00:20:41.975 }, 00:20:41.975 "method": "bdev_nvme_attach_controller" 00:20:41.975 } 00:20:41.975 EOF 00:20:41.975 )") 00:20:41.975 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:41.975 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:41.975 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:41.975 { 00:20:41.975 "params": { 00:20:41.975 "name": "Nvme$subsystem", 00:20:41.975 "trtype": "$TEST_TRANSPORT", 00:20:41.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.975 "adrfam": "ipv4", 00:20:41.975 "trsvcid": "$NVMF_PORT", 00:20:41.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.975 "hdgst": ${hdgst:-false}, 00:20:41.975 "ddgst": ${ddgst:-false} 00:20:41.975 }, 00:20:41.975 "method": "bdev_nvme_attach_controller" 00:20:41.975 } 00:20:41.975 EOF 00:20:41.975 )") 00:20:41.975 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:41.975 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:41.975 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:41.975 { 00:20:41.975 "params": { 00:20:41.975 "name": "Nvme$subsystem", 00:20:41.975 "trtype": "$TEST_TRANSPORT", 00:20:41.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.975 "adrfam": "ipv4", 00:20:41.975 "trsvcid": "$NVMF_PORT", 00:20:41.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.975 "hdgst": ${hdgst:-false}, 00:20:41.975 "ddgst": ${ddgst:-false} 00:20:41.975 }, 00:20:41.975 "method": "bdev_nvme_attach_controller" 00:20:41.975 } 00:20:41.975 EOF 00:20:41.975 )") 00:20:41.975 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:41.975 [2024-12-12 10:34:15.825488] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:20:41.975 [2024-12-12 10:34:15.825537] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1572563 ] 00:20:41.975 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:41.975 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:41.975 { 00:20:41.975 "params": { 00:20:41.975 "name": "Nvme$subsystem", 00:20:41.975 "trtype": "$TEST_TRANSPORT", 00:20:41.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.975 "adrfam": "ipv4", 00:20:41.975 "trsvcid": "$NVMF_PORT", 00:20:41.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.975 "hdgst": ${hdgst:-false}, 00:20:41.975 "ddgst": ${ddgst:-false} 00:20:41.975 }, 00:20:41.975 "method": "bdev_nvme_attach_controller" 00:20:41.975 } 00:20:41.975 EOF 00:20:41.975 )") 00:20:41.975 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:41.975 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:41.975 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:41.975 { 00:20:41.975 "params": { 00:20:41.975 "name": "Nvme$subsystem", 00:20:41.975 "trtype": "$TEST_TRANSPORT", 00:20:41.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.975 "adrfam": "ipv4", 00:20:41.975 "trsvcid": "$NVMF_PORT", 00:20:41.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.975 "hdgst": ${hdgst:-false}, 00:20:41.975 "ddgst": ${ddgst:-false} 00:20:41.975 }, 00:20:41.975 "method": "bdev_nvme_attach_controller" 00:20:41.975 } 00:20:41.975 EOF 00:20:41.975 )") 00:20:41.975 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:41.975 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:41.975 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:41.975 { 00:20:41.975 "params": { 00:20:41.975 "name": "Nvme$subsystem", 00:20:41.975 "trtype": "$TEST_TRANSPORT", 00:20:41.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.975 "adrfam": "ipv4", 00:20:41.975 "trsvcid": "$NVMF_PORT", 00:20:41.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.975 "hdgst": ${hdgst:-false}, 00:20:41.975 "ddgst": ${ddgst:-false} 00:20:41.975 }, 00:20:41.975 "method": "bdev_nvme_attach_controller" 00:20:41.975 } 00:20:41.975 EOF 00:20:41.975 )") 00:20:41.975 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:41.975 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
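The generated JSON never touches disk: shutdown.sh hands it to the benchmark binary through bash process substitution, which the child process sees as a /dev/fd path (the bdevperf command traced above at shutdown.sh@92 received --json /dev/fd/62). A sketch of that wiring, with the flags copied from the traced command line and gen_json standing in for gen_nvmf_target_json:

# <(...) runs gen_json in a subshell and expands to a /dev/fd/NN pipe
# that bdevperf opens and reads its whole bdev configuration from.
./build/examples/bdevperf \
    --json <(gen_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 1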
00:20:41.975 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:41.975 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:41.975 "params": { 00:20:41.975 "name": "Nvme1", 00:20:41.975 "trtype": "tcp", 00:20:41.975 "traddr": "10.0.0.2", 00:20:41.975 "adrfam": "ipv4", 00:20:41.975 "trsvcid": "4420", 00:20:41.975 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.975 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:41.975 "hdgst": false, 00:20:41.975 "ddgst": false 00:20:41.975 }, 00:20:41.975 "method": "bdev_nvme_attach_controller" 00:20:41.975 },{ 00:20:41.975 "params": { 00:20:41.975 "name": "Nvme2", 00:20:41.975 "trtype": "tcp", 00:20:41.975 "traddr": "10.0.0.2", 00:20:41.975 "adrfam": "ipv4", 00:20:41.975 "trsvcid": "4420", 00:20:41.975 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:41.975 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:41.975 "hdgst": false, 00:20:41.975 "ddgst": false 00:20:41.975 }, 00:20:41.975 "method": "bdev_nvme_attach_controller" 00:20:41.975 },{ 00:20:41.975 "params": { 00:20:41.975 "name": "Nvme3", 00:20:41.975 "trtype": "tcp", 00:20:41.975 "traddr": "10.0.0.2", 00:20:41.975 "adrfam": "ipv4", 00:20:41.975 "trsvcid": "4420", 00:20:41.975 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:41.975 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:41.975 "hdgst": false, 00:20:41.975 "ddgst": false 00:20:41.975 }, 00:20:41.975 "method": "bdev_nvme_attach_controller" 00:20:41.975 },{ 00:20:41.975 "params": { 00:20:41.975 "name": "Nvme4", 00:20:41.975 "trtype": "tcp", 00:20:41.975 "traddr": "10.0.0.2", 00:20:41.975 "adrfam": "ipv4", 00:20:41.975 "trsvcid": "4420", 00:20:41.975 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:41.975 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:41.975 "hdgst": false, 00:20:41.975 "ddgst": false 00:20:41.975 }, 00:20:41.975 "method": "bdev_nvme_attach_controller" 00:20:41.975 },{ 00:20:41.976 "params": { 00:20:41.976 "name": "Nvme5", 00:20:41.976 "trtype": "tcp", 00:20:41.976 "traddr": "10.0.0.2", 00:20:41.976 "adrfam": "ipv4", 00:20:41.976 "trsvcid": "4420", 00:20:41.976 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:41.976 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:41.976 "hdgst": false, 00:20:41.976 "ddgst": false 00:20:41.976 }, 00:20:41.976 "method": "bdev_nvme_attach_controller" 00:20:41.976 },{ 00:20:41.976 "params": { 00:20:41.976 "name": "Nvme6", 00:20:41.976 "trtype": "tcp", 00:20:41.976 "traddr": "10.0.0.2", 00:20:41.976 "adrfam": "ipv4", 00:20:41.976 "trsvcid": "4420", 00:20:41.976 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:41.976 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:41.976 "hdgst": false, 00:20:41.976 "ddgst": false 00:20:41.976 }, 00:20:41.976 "method": "bdev_nvme_attach_controller" 00:20:41.976 },{ 00:20:41.976 "params": { 00:20:41.976 "name": "Nvme7", 00:20:41.976 "trtype": "tcp", 00:20:41.976 "traddr": "10.0.0.2", 00:20:41.976 "adrfam": "ipv4", 00:20:41.976 "trsvcid": "4420", 00:20:41.976 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:41.976 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:41.976 "hdgst": false, 00:20:41.976 "ddgst": false 00:20:41.976 }, 00:20:41.976 "method": "bdev_nvme_attach_controller" 00:20:41.976 },{ 00:20:41.976 "params": { 00:20:41.976 "name": "Nvme8", 00:20:41.976 "trtype": "tcp", 00:20:41.976 "traddr": "10.0.0.2", 00:20:41.976 "adrfam": "ipv4", 00:20:41.976 "trsvcid": "4420", 00:20:41.976 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:41.976 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:20:41.976 "hdgst": false, 00:20:41.976 "ddgst": false 00:20:41.976 }, 00:20:41.976 "method": "bdev_nvme_attach_controller" 00:20:41.976 },{ 00:20:41.976 "params": { 00:20:41.976 "name": "Nvme9", 00:20:41.976 "trtype": "tcp", 00:20:41.976 "traddr": "10.0.0.2", 00:20:41.976 "adrfam": "ipv4", 00:20:41.976 "trsvcid": "4420", 00:20:41.976 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:41.976 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:41.976 "hdgst": false, 00:20:41.976 "ddgst": false 00:20:41.976 }, 00:20:41.976 "method": "bdev_nvme_attach_controller" 00:20:41.976 },{ 00:20:41.976 "params": { 00:20:41.976 "name": "Nvme10", 00:20:41.976 "trtype": "tcp", 00:20:41.976 "traddr": "10.0.0.2", 00:20:41.976 "adrfam": "ipv4", 00:20:41.976 "trsvcid": "4420", 00:20:41.976 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:41.976 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:41.976 "hdgst": false, 00:20:41.976 "ddgst": false 00:20:41.976 }, 00:20:41.976 "method": "bdev_nvme_attach_controller" 00:20:41.976 }' 00:20:41.976 [2024-12-12 10:34:15.901781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.976 [2024-12-12 10:34:15.942374] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.873 Running I/O for 1 seconds... 00:20:44.808 2264.00 IOPS, 141.50 MiB/s 00:20:44.808 Latency(us) 00:20:44.808 [2024-12-12T09:34:18.831Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:44.808 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:44.808 Verification LBA range: start 0x0 length 0x400 00:20:44.808 Nvme1n1 : 1.08 237.40 14.84 0.00 0.00 266964.85 18599.74 222697.57 00:20:44.808 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:44.808 Verification LBA range: start 0x0 length 0x400 00:20:44.808 Nvme2n1 : 1.05 248.87 15.55 0.00 0.00 249451.31 6210.32 218702.99 00:20:44.808 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:44.808 Verification LBA range: start 0x0 length 0x400 00:20:44.808 Nvme3n1 : 1.08 301.32 18.83 0.00 0.00 200115.39 13731.35 206719.27 00:20:44.808 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:44.808 Verification LBA range: start 0x0 length 0x400 00:20:44.808 Nvme4n1 : 1.11 292.30 18.27 0.00 0.00 206961.39 5180.46 218702.99 00:20:44.808 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:44.808 Verification LBA range: start 0x0 length 0x400 00:20:44.808 Nvme5n1 : 1.11 296.69 18.54 0.00 0.00 199592.17 8363.64 208716.56 00:20:44.808 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:44.808 Verification LBA range: start 0x0 length 0x400 00:20:44.808 Nvme6n1 : 1.12 284.71 17.79 0.00 0.00 207154.52 15603.81 243669.09 00:20:44.808 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:44.808 Verification LBA range: start 0x0 length 0x400 00:20:44.808 Nvme7n1 : 1.12 286.61 17.91 0.00 0.00 202570.41 15603.81 216705.71 00:20:44.808 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:44.808 Verification LBA range: start 0x0 length 0x400 00:20:44.808 Nvme8n1 : 1.12 289.31 18.08 0.00 0.00 197535.56 2387.38 215707.06 00:20:44.808 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:44.808 Verification LBA range: start 0x0 length 0x400 00:20:44.808 Nvme9n1 : 1.13 282.91 17.68 0.00 0.00 198617.77 15603.81 201726.05 00:20:44.808 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:20:44.808 Verification LBA range: start 0x0 length 0x400 00:20:44.808 Nvme10n1 : 1.16 330.57 20.66 0.00 0.00 168649.47 5242.88 238675.87 00:20:44.808 [2024-12-12T09:34:18.831Z] =================================================================================================================== 00:20:44.808 [2024-12-12T09:34:18.831Z] Total : 2850.68 178.17 0.00 0.00 206984.04 2387.38 243669.09 00:20:45.066 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:20:45.066 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:45.066 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:45.066 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:45.066 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:45.066 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:45.066 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:20:45.066 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:45.066 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:20:45.066 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:45.066 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:45.066 rmmod nvme_tcp 00:20:45.066 rmmod nvme_fabrics 00:20:45.066 rmmod nvme_keyring 00:20:45.066 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:45.066 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:20:45.066 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:20:45.066 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 1571821 ']' 00:20:45.066 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 1571821 00:20:45.066 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 1571821 ']' 00:20:45.066 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 1571821 00:20:45.066 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:20:45.066 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:45.066 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1571821 00:20:45.066 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:45.066 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:45.066 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1571821' 00:20:45.066 killing process with pid 1571821 00:20:45.066 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 1571821 00:20:45.066 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 1571821 00:20:45.633 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:45.633 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:45.633 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:45.633 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:20:45.633 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:20:45.633 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:45.633 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:20:45.633 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:45.633 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:45.633 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:45.633 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:45.633 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:47.539 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:47.539 00:20:47.539 real 0m15.392s 00:20:47.539 user 0m34.853s 00:20:47.539 sys 0m5.749s 00:20:47.539 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:47.539 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:47.539 ************************************ 00:20:47.539 END TEST nvmf_shutdown_tc1 00:20:47.539 ************************************ 00:20:47.539 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:47.539 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:47.539 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:47.539 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:47.539 ************************************ 00:20:47.539 START TEST nvmf_shutdown_tc2 00:20:47.539 ************************************ 00:20:47.539 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:20:47.539 10:34:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:20:47.539 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:47.539 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:47.539 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:47.539 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:47.539 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:47.539 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:47.539 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:47.539 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:47.539 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:47.539 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:47.539 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:47.539 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:47.539 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:47.539 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:47.539 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:47.539 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:47.539 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:47.539 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:47.539 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:47.539 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:47.539 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:20:47.539 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:47.539 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:20:47.539 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:20:47.539 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:20:47.539 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:20:47.539 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 
-- # mlx=() 00:20:47.539 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:47.539 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:47.539 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:47.540 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:47.540 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:47.540 Found net devices under 0000:af:00.0: cvl_0_0 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:47.540 10:34:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:47.540 Found net devices under 0000:af:00.1: cvl_0_1 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:47.540 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:47.799 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:47.799 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:47.799 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:47.799 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:47.799 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:20:47.799 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:20:47.799 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:20:48.058 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:20:48.058 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:20:48.058 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:20:48.058 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:20:48.058 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:48.058 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms
00:20:48.058
00:20:48.058 --- 10.0.0.2 ping statistics ---
00:20:48.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:48.058 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms
00:20:48.058 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:48.058 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:48.058 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms
00:20:48.058
00:20:48.058 --- 10.0.0.1 ping statistics ---
00:20:48.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:48.058 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms
00:20:48.058 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:48.058 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0
00:20:48.058 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:20:48.058 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:48.058 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:20:48.058 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:20:48.058 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:48.058 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:20:48.058 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:20:48.058 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:20:48.058 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:20:48.058 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:20:48.058 10:34:21
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:48.058 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1573575 00:20:48.058 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1573575 00:20:48.058 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:48.058 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1573575 ']' 00:20:48.058 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.058 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:48.059 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:48.059 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:48.059 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:48.059 [2024-12-12 10:34:21.993193] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:20:48.059 [2024-12-12 10:34:21.993234] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:48.059 [2024-12-12 10:34:22.068441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:48.317 [2024-12-12 10:34:22.111029] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:48.317 [2024-12-12 10:34:22.111062] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:48.317 [2024-12-12 10:34:22.111069] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:48.317 [2024-12-12 10:34:22.111075] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:48.317 [2024-12-12 10:34:22.111080] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
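The target was just launched with -m 0x1E, and the four reactor notices that follow line up with that mask: 0x1E is binary 11110, bit n selects core n, so cores 1 through 4 each get a reactor while core 0 is left free. A quick shell one-liner to decode any such mask (illustrative, not part of the test scripts):

mask=0x1E; printf 'cores:'; for n in {0..31}; do (( (mask >> n) & 1 )) && printf ' %d' "$n"; done; echo
# -> cores: 1 2 3 4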
00:20:48.317 [2024-12-12 10:34:22.112591] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:20:48.317 [2024-12-12 10:34:22.112665] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:20:48.317 [2024-12-12 10:34:22.112770] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:48.317 [2024-12-12 10:34:22.112771] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:20:48.317 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:48.317 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:48.317 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:48.317 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:48.317 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:48.317 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:48.317 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:48.317 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.317 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:48.317 [2024-12-12 10:34:22.249043] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:48.317 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.317 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:48.317 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:48.317 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:48.317 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:48.317 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:48.317 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.317 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:48.317 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.317 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:48.317 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.317 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:48.317 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:20:48.317 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:48.317 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.317 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:48.317 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.317 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:48.317 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.317 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:48.317 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.317 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:48.317 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.317 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:48.317 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.317 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:48.317 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:48.317 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.317 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:48.575 Malloc1 00:20:48.575 [2024-12-12 10:34:22.362301] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:48.575 Malloc2 00:20:48.575 Malloc3 00:20:48.575 Malloc4 00:20:48.575 Malloc5 00:20:48.575 Malloc6 00:20:48.575 Malloc7 00:20:48.833 Malloc8 00:20:48.833 Malloc9 00:20:48.833 Malloc10 00:20:48.833 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.834 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:48.834 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:48.834 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:48.834 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1573840 00:20:48.834 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1573840 /var/tmp/bdevperf.sock 00:20:48.834 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1573840 ']' 00:20:48.834 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:48.834 10:34:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:48.834 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:48.834 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:48.834 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:48.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:48.834 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:20:48.834 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:48.834 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:20:48.834 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:48.834 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:48.834 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:48.834 { 00:20:48.834 "params": { 00:20:48.834 "name": "Nvme$subsystem", 00:20:48.834 "trtype": "$TEST_TRANSPORT", 00:20:48.834 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.834 "adrfam": "ipv4", 00:20:48.834 "trsvcid": "$NVMF_PORT", 00:20:48.834 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.834 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.834 "hdgst": ${hdgst:-false}, 00:20:48.834 "ddgst": ${ddgst:-false} 00:20:48.834 }, 00:20:48.834 "method": "bdev_nvme_attach_controller" 00:20:48.834 } 00:20:48.834 EOF 00:20:48.834 )") 00:20:48.834 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:48.834 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:48.834 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:48.834 { 00:20:48.834 "params": { 00:20:48.834 "name": "Nvme$subsystem", 00:20:48.834 "trtype": "$TEST_TRANSPORT", 00:20:48.834 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.834 "adrfam": "ipv4", 00:20:48.834 "trsvcid": "$NVMF_PORT", 00:20:48.834 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.834 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.834 "hdgst": ${hdgst:-false}, 00:20:48.834 "ddgst": ${ddgst:-false} 00:20:48.834 }, 00:20:48.834 "method": "bdev_nvme_attach_controller" 00:20:48.834 } 00:20:48.834 EOF 00:20:48.834 )") 00:20:48.834 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:48.834 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:48.834 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:48.834 { 00:20:48.834 "params": { 00:20:48.834 
"name": "Nvme$subsystem", 00:20:48.834 "trtype": "$TEST_TRANSPORT", 00:20:48.834 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.834 "adrfam": "ipv4", 00:20:48.834 "trsvcid": "$NVMF_PORT", 00:20:48.834 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.834 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.834 "hdgst": ${hdgst:-false}, 00:20:48.834 "ddgst": ${ddgst:-false} 00:20:48.834 }, 00:20:48.834 "method": "bdev_nvme_attach_controller" 00:20:48.834 } 00:20:48.834 EOF 00:20:48.834 )") 00:20:48.834 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:48.834 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:48.834 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:48.834 { 00:20:48.834 "params": { 00:20:48.834 "name": "Nvme$subsystem", 00:20:48.834 "trtype": "$TEST_TRANSPORT", 00:20:48.834 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.834 "adrfam": "ipv4", 00:20:48.834 "trsvcid": "$NVMF_PORT", 00:20:48.834 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.834 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.834 "hdgst": ${hdgst:-false}, 00:20:48.834 "ddgst": ${ddgst:-false} 00:20:48.834 }, 00:20:48.834 "method": "bdev_nvme_attach_controller" 00:20:48.834 } 00:20:48.834 EOF 00:20:48.834 )") 00:20:48.834 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:48.834 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:48.834 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:48.834 { 00:20:48.834 "params": { 00:20:48.834 "name": "Nvme$subsystem", 00:20:48.834 "trtype": "$TEST_TRANSPORT", 00:20:48.834 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.834 "adrfam": "ipv4", 00:20:48.834 "trsvcid": "$NVMF_PORT", 00:20:48.834 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.834 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.834 "hdgst": ${hdgst:-false}, 00:20:48.834 "ddgst": ${ddgst:-false} 00:20:48.834 }, 00:20:48.834 "method": "bdev_nvme_attach_controller" 00:20:48.834 } 00:20:48.834 EOF 00:20:48.834 )") 00:20:48.834 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:48.834 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:48.834 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:48.834 { 00:20:48.834 "params": { 00:20:48.834 "name": "Nvme$subsystem", 00:20:48.834 "trtype": "$TEST_TRANSPORT", 00:20:48.834 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.834 "adrfam": "ipv4", 00:20:48.834 "trsvcid": "$NVMF_PORT", 00:20:48.834 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.834 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.834 "hdgst": ${hdgst:-false}, 00:20:48.834 "ddgst": ${ddgst:-false} 00:20:48.834 }, 00:20:48.834 "method": "bdev_nvme_attach_controller" 00:20:48.834 } 00:20:48.834 EOF 00:20:48.834 )") 00:20:48.834 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:48.834 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:20:48.834 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:48.834 { 00:20:48.834 "params": { 00:20:48.834 "name": "Nvme$subsystem", 00:20:48.834 "trtype": "$TEST_TRANSPORT", 00:20:48.834 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.834 "adrfam": "ipv4", 00:20:48.834 "trsvcid": "$NVMF_PORT", 00:20:48.834 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.834 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.834 "hdgst": ${hdgst:-false}, 00:20:48.834 "ddgst": ${ddgst:-false} 00:20:48.834 }, 00:20:48.834 "method": "bdev_nvme_attach_controller" 00:20:48.834 } 00:20:48.834 EOF 00:20:48.834 )") 00:20:48.834 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:48.834 [2024-12-12 10:34:22.832972] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:20:48.834 [2024-12-12 10:34:22.833017] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1573840 ] 00:20:48.834 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:48.834 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:48.834 { 00:20:48.834 "params": { 00:20:48.834 "name": "Nvme$subsystem", 00:20:48.834 "trtype": "$TEST_TRANSPORT", 00:20:48.834 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.834 "adrfam": "ipv4", 00:20:48.834 "trsvcid": "$NVMF_PORT", 00:20:48.834 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.834 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.834 "hdgst": ${hdgst:-false}, 00:20:48.834 "ddgst": ${ddgst:-false} 00:20:48.834 }, 00:20:48.834 "method": "bdev_nvme_attach_controller" 00:20:48.834 } 00:20:48.834 EOF 00:20:48.834 )") 00:20:48.834 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:48.834 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:48.834 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:48.834 { 00:20:48.834 "params": { 00:20:48.834 "name": "Nvme$subsystem", 00:20:48.834 "trtype": "$TEST_TRANSPORT", 00:20:48.834 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.834 "adrfam": "ipv4", 00:20:48.834 "trsvcid": "$NVMF_PORT", 00:20:48.835 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.835 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.835 "hdgst": ${hdgst:-false}, 00:20:48.835 "ddgst": ${ddgst:-false} 00:20:48.835 }, 00:20:48.835 "method": "bdev_nvme_attach_controller" 00:20:48.835 } 00:20:48.835 EOF 00:20:48.835 )") 00:20:48.835 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:48.835 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:48.835 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:48.835 { 00:20:48.835 "params": { 00:20:48.835 "name": "Nvme$subsystem", 00:20:48.835 "trtype": "$TEST_TRANSPORT", 00:20:48.835 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.835 
"adrfam": "ipv4", 00:20:48.835 "trsvcid": "$NVMF_PORT", 00:20:48.835 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.835 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.835 "hdgst": ${hdgst:-false}, 00:20:48.835 "ddgst": ${ddgst:-false} 00:20:48.835 }, 00:20:48.835 "method": "bdev_nvme_attach_controller" 00:20:48.835 } 00:20:48.835 EOF 00:20:48.835 )") 00:20:48.835 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:48.835 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:20:49.093 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:20:49.093 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:49.093 "params": { 00:20:49.093 "name": "Nvme1", 00:20:49.093 "trtype": "tcp", 00:20:49.093 "traddr": "10.0.0.2", 00:20:49.093 "adrfam": "ipv4", 00:20:49.093 "trsvcid": "4420", 00:20:49.093 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:49.093 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:49.093 "hdgst": false, 00:20:49.093 "ddgst": false 00:20:49.093 }, 00:20:49.093 "method": "bdev_nvme_attach_controller" 00:20:49.093 },{ 00:20:49.093 "params": { 00:20:49.093 "name": "Nvme2", 00:20:49.093 "trtype": "tcp", 00:20:49.093 "traddr": "10.0.0.2", 00:20:49.093 "adrfam": "ipv4", 00:20:49.093 "trsvcid": "4420", 00:20:49.093 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:49.093 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:49.093 "hdgst": false, 00:20:49.093 "ddgst": false 00:20:49.093 }, 00:20:49.093 "method": "bdev_nvme_attach_controller" 00:20:49.093 },{ 00:20:49.093 "params": { 00:20:49.093 "name": "Nvme3", 00:20:49.093 "trtype": "tcp", 00:20:49.093 "traddr": "10.0.0.2", 00:20:49.093 "adrfam": "ipv4", 00:20:49.093 "trsvcid": "4420", 00:20:49.093 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:49.093 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:49.093 "hdgst": false, 00:20:49.093 "ddgst": false 00:20:49.093 }, 00:20:49.093 "method": "bdev_nvme_attach_controller" 00:20:49.093 },{ 00:20:49.093 "params": { 00:20:49.093 "name": "Nvme4", 00:20:49.093 "trtype": "tcp", 00:20:49.093 "traddr": "10.0.0.2", 00:20:49.093 "adrfam": "ipv4", 00:20:49.093 "trsvcid": "4420", 00:20:49.093 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:49.093 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:49.093 "hdgst": false, 00:20:49.093 "ddgst": false 00:20:49.093 }, 00:20:49.093 "method": "bdev_nvme_attach_controller" 00:20:49.093 },{ 00:20:49.093 "params": { 00:20:49.093 "name": "Nvme5", 00:20:49.093 "trtype": "tcp", 00:20:49.093 "traddr": "10.0.0.2", 00:20:49.093 "adrfam": "ipv4", 00:20:49.093 "trsvcid": "4420", 00:20:49.093 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:49.093 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:49.093 "hdgst": false, 00:20:49.093 "ddgst": false 00:20:49.093 }, 00:20:49.093 "method": "bdev_nvme_attach_controller" 00:20:49.093 },{ 00:20:49.093 "params": { 00:20:49.093 "name": "Nvme6", 00:20:49.093 "trtype": "tcp", 00:20:49.093 "traddr": "10.0.0.2", 00:20:49.093 "adrfam": "ipv4", 00:20:49.093 "trsvcid": "4420", 00:20:49.093 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:49.093 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:49.093 "hdgst": false, 00:20:49.093 "ddgst": false 00:20:49.093 }, 00:20:49.093 "method": "bdev_nvme_attach_controller" 00:20:49.093 },{ 00:20:49.093 "params": { 00:20:49.093 "name": "Nvme7", 00:20:49.093 "trtype": "tcp", 00:20:49.093 "traddr": "10.0.0.2", 
00:20:49.093 "adrfam": "ipv4", 00:20:49.093 "trsvcid": "4420", 00:20:49.093 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:49.093 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:49.093 "hdgst": false, 00:20:49.093 "ddgst": false 00:20:49.093 }, 00:20:49.093 "method": "bdev_nvme_attach_controller" 00:20:49.093 },{ 00:20:49.093 "params": { 00:20:49.093 "name": "Nvme8", 00:20:49.093 "trtype": "tcp", 00:20:49.093 "traddr": "10.0.0.2", 00:20:49.093 "adrfam": "ipv4", 00:20:49.093 "trsvcid": "4420", 00:20:49.093 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:49.093 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:49.093 "hdgst": false, 00:20:49.093 "ddgst": false 00:20:49.093 }, 00:20:49.093 "method": "bdev_nvme_attach_controller" 00:20:49.093 },{ 00:20:49.093 "params": { 00:20:49.093 "name": "Nvme9", 00:20:49.093 "trtype": "tcp", 00:20:49.093 "traddr": "10.0.0.2", 00:20:49.093 "adrfam": "ipv4", 00:20:49.093 "trsvcid": "4420", 00:20:49.093 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:49.093 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:49.093 "hdgst": false, 00:20:49.093 "ddgst": false 00:20:49.093 }, 00:20:49.093 "method": "bdev_nvme_attach_controller" 00:20:49.093 },{ 00:20:49.093 "params": { 00:20:49.093 "name": "Nvme10", 00:20:49.093 "trtype": "tcp", 00:20:49.093 "traddr": "10.0.0.2", 00:20:49.093 "adrfam": "ipv4", 00:20:49.093 "trsvcid": "4420", 00:20:49.093 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:49.093 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:49.093 "hdgst": false, 00:20:49.093 "ddgst": false 00:20:49.093 }, 00:20:49.093 "method": "bdev_nvme_attach_controller" 00:20:49.093 }' 00:20:49.094 [2024-12-12 10:34:22.908063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.094 [2024-12-12 10:34:22.949091] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.992 Running I/O for 10 seconds... 
00:20:50.992 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:50.992 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:50.992 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:50.992 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.992 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:50.992 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.992 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:50.992 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:50.992 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:20:50.992 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:20:50.992 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:20:50.992 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:20:50.992 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:50.992 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:50.992 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:50.992 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.992 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:50.992 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.992 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:20:50.992 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:20:50.992 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:51.251 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:51.251 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:51.251 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:51.251 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:51.251 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.251 10:34:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:51.251 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.251 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:20:51.251 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:20:51.251 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:20:51.251 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:20:51.251 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:20:51.251 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1573840 00:20:51.251 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1573840 ']' 00:20:51.251 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1573840 00:20:51.251 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:20:51.251 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:51.251 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1573840 00:20:51.252 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:51.252 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:51.252 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1573840' 00:20:51.252 killing process with pid 1573840 00:20:51.252 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1573840 00:20:51.252 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1573840
00:20:51.252 Received shutdown signal, test time was about 0.665461 seconds
00:20:51.252
00:20:51.252 Latency(us)
00:20:51.252 [2024-12-12T09:34:25.275Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:51.252 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:51.252 Verification LBA range: start 0x0 length 0x400
00:20:51.252 Nvme1n1 : 0.65 296.51 18.53 0.00 0.00 212190.19 14605.17 212711.13
00:20:51.252 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:51.252 Verification LBA range: start 0x0 length 0x400
00:20:51.252 Nvme2n1 : 0.65 306.77 19.17 0.00 0.00 196357.24 12295.80 194735.54
00:20:51.252 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:51.252 Verification LBA range: start 0x0 length 0x400
00:20:51.252 Nvme3n1 : 0.64 308.69 19.29 0.00 0.00 192918.72 3464.05 205720.62
00:20:51.252 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:51.252 Verification LBA range: start 0x0 length 0x400
00:20:51.252 Nvme4n1 : 0.66 296.46 18.53 0.00 0.00 195742.78 4837.18 225693.50
00:20:51.252 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:51.252 Verification LBA range: start 0x0 length 0x400
00:20:51.252 Nvme5n1 : 0.66 290.79 18.17 0.00 0.00 196071.29 20222.54 189742.32
00:20:51.252 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:51.252 Verification LBA range: start 0x0 length 0x400
00:20:51.252 Nvme6n1 : 0.66 288.81 18.05 0.00 0.00 192432.68 16602.45 218702.99
00:20:51.252 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:51.252 Verification LBA range: start 0x0 length 0x400
00:20:51.252 Nvme7n1 : 0.66 292.00 18.25 0.00 0.00 183893.50 16103.13 196732.83
00:20:51.252 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:51.252 Verification LBA range: start 0x0 length 0x400
00:20:51.252 Nvme8n1 : 0.65 294.49 18.41 0.00 0.00 177448.47 27587.54 176759.95
00:20:51.252 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:51.252 Verification LBA range: start 0x0 length 0x400
00:20:51.252 Nvme9n1 : 0.63 203.56 12.72 0.00 0.00 247593.45 31207.62 229688.08
00:20:51.252 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:51.252 Verification LBA range: start 0x0 length 0x400
00:20:51.252 Nvme10n1 : 0.63 202.98 12.69 0.00 0.00 240376.69 16477.62 234681.30
00:20:51.252 [2024-12-12T09:34:25.275Z] ===================================================================================================================
00:20:51.252 [2024-12-12T09:34:25.275Z] Total : 2781.05 173.82 0.00 0.00 200553.93 3464.05 234681.30
00:20:51.509 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:20:52.440 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1573575 00:20:52.440 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:20:52.440 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:52.440 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:52.440 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:52.440 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:52.440 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:52.440 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:20:52.441 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:52.441 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:20:52.441 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:52.441 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:52.441 rmmod nvme_tcp 00:20:52.441 rmmod nvme_fabrics 00:20:52.441 rmmod nvme_keyring 00:20:52.441 10:34:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:52.441 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:20:52.441 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:20:52.441 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 1573575 ']' 00:20:52.441 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 1573575 00:20:52.441 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1573575 ']' 00:20:52.441 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1573575 00:20:52.441 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:20:52.441 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:52.441 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1573575 00:20:52.698 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:52.698 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:52.698 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1573575' 00:20:52.698 killing process with pid 1573575 00:20:52.698 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1573575 00:20:52.698 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1573575 00:20:52.957 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:52.957 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:52.957 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:52.957 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:20:52.957 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:20:52.957 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:52.957 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:20:52.957 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:52.957 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:52.957 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:52.957 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:52.957 10:34:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.490 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:55.490 00:20:55.490 real 0m7.415s 00:20:55.490 user 0m21.539s 00:20:55.490 sys 0m1.255s 00:20:55.490 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:55.490 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:55.490 ************************************ 00:20:55.490 END TEST nvmf_shutdown_tc2 00:20:55.490 ************************************ 00:20:55.490 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:55.490 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:55.490 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:55.490 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:55.490 ************************************ 00:20:55.490 START TEST nvmf_shutdown_tc3 00:20:55.490 ************************************ 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:55.490 10:34:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:55.490 10:34:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:55.490 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:55.490 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:55.490 10:34:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:55.490 Found net devices under 0000:af:00.0: cvl_0_0 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:55.490 Found net devices under 0000:af:00.1: cvl_0_1 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:55.490 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:55.491 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:55.491 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:55.491 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:55.491 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:55.491 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:55.491 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:55.491 10:34:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:55.491 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:55.491 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:55.491 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:55.491 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:55.491 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:55.491 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:55.491 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:55.491 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:55.491 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:55.491 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:55.491 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:55.491 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:55.491 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:55.491 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:55.491 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:55.491 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:55.491 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:55.491 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:55.491 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:20:55.491 00:20:55.491 --- 10.0.0.2 ping statistics --- 00:20:55.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.491 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:20:55.491 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:55.491 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:55.491 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:20:55.491 00:20:55.491 --- 10.0.0.1 ping statistics --- 00:20:55.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.491 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:20:55.491 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:55.491 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:20:55.491 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:55.491 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:55.491 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:55.491 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:55.491 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:55.491 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:55.491 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:55.491 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:55.491 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:55.491 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:55.491 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:55.491 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1574975 00:20:55.491 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1574975 00:20:55.491 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:55.491 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1574975 ']' 00:20:55.491 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.491 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:55.491 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
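[Editor's note] Replayed from the nvmf_tcp_init trace above, this is the phy-mode network plumbing: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the initiator port cvl_0_1 keeps 10.0.0.1/24 in the root namespace, an iptables rule opens TCP port 4420 on the initiator interface, and a single ping in each direction proves the path before nvmf_tgt (nvmfpid 1574975 above) is started inside the namespace. The commands are taken directly from the trace; the cvl_* device names are specific to this rig's e810 ports.

# Replay of the traced setup (device names are rig-specific):
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns

Running the target behind a namespace forces real traffic between the two physical ports instead of kernel loopback, which is the point of the phy network type this job tests.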
00:20:55.491 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:55.491 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:55.491 [2024-12-12 10:34:29.389751] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:20:55.491 [2024-12-12 10:34:29.389793] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:55.491 [2024-12-12 10:34:29.467485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:55.491 [2024-12-12 10:34:29.507767] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:55.491 [2024-12-12 10:34:29.507803] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:55.491 [2024-12-12 10:34:29.507810] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:55.491 [2024-12-12 10:34:29.507817] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:55.491 [2024-12-12 10:34:29.507822] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:55.491 [2024-12-12 10:34:29.509252] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:20:55.491 [2024-12-12 10:34:29.509294] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:20:55.491 [2024-12-12 10:34:29.509326] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:55.491 [2024-12-12 10:34:29.509326] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:20:55.749 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:55.749 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:20:55.749 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:55.749 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:55.749 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:55.749 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:55.749 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:55.749 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.749 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:55.749 [2024-12-12 10:34:29.645305] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:55.749 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.749 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:55.749 10:34:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:55.749 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:55.749 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:55.749 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:55.749 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:55.749 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:55.749 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:55.749 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:55.749 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:55.749 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:55.749 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:55.750 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:55.750 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:55.750 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:55.750 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:55.750 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:55.750 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:55.750 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:55.750 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:55.750 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:55.750 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:55.750 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:55.750 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:55.750 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:55.750 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:55.750 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.750 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:55.750 Malloc1 
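[Editor's note] The cat loop just traced (shutdown.sh@28-29) writes one RPC block per subsystem into rpcs.txt, and the single rpc_cmd at shutdown.sh@36 replays the whole batch, which is why Malloc1 appears here and Malloc2 through Malloc10 plus the 10.0.0.2:4420 listener follow. The heredoc body itself is never echoed in the trace, so the sketch below is only a plausible reconstruction, not the literal shutdown.sh contents; the bdev size, block size, and serial numbers are assumptions.

# Plausible shape of the per-subsystem batch (assumed, not from the trace):
for i in {1..10}; do
cat <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done > rpcs.txt
scripts/rpc.py < rpcs.txt    # plain rpc.py standing in for the harness's rpc_cmd wrapper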
00:20:55.750 [2024-12-12 10:34:29.747682] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:55.750 Malloc2 00:20:56.007 Malloc3 00:20:56.007 Malloc4 00:20:56.007 Malloc5 00:20:56.007 Malloc6 00:20:56.007 Malloc7 00:20:56.265 Malloc8 00:20:56.265 Malloc9 00:20:56.265 Malloc10 00:20:56.265 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.265 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:56.265 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:56.265 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:56.266 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1575133 00:20:56.266 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1575133 /var/tmp/bdevperf.sock 00:20:56.266 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1575133 ']' 00:20:56.266 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:56.266 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:56.266 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:56.266 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:56.266 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:56.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
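Two details worth noting in the bdevperf launch above: the app is given its own RPC socket (-r /var/tmp/bdevperf.sock) so the test can poll it later, and its bdev configuration arrives as --json /dev/fd/63, i.e. via process substitution from the gen_nvmf_target_json helper whose trace follows. A stripped-down sketch of the same launch; the binary path is shortened, the options are copied from the trace, and gen_nvmf_target_json is assumed to be the helper sourced from the test's nvmf/common.sh:

    # Sketch: hand the generated JSON to bdevperf through an anonymous
    # pipe; the shell names it /dev/fd/63, which is what the trace shows.
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 10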
00:20:56.266 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:20:56.266 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:56.266 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:20:56.266 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:56.266 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:56.266 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:56.266 { 00:20:56.266 "params": { 00:20:56.266 "name": "Nvme$subsystem", 00:20:56.266 "trtype": "$TEST_TRANSPORT", 00:20:56.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.266 "adrfam": "ipv4", 00:20:56.266 "trsvcid": "$NVMF_PORT", 00:20:56.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.266 "hdgst": ${hdgst:-false}, 00:20:56.266 "ddgst": ${ddgst:-false} 00:20:56.266 }, 00:20:56.266 "method": "bdev_nvme_attach_controller" 00:20:56.266 } 00:20:56.266 EOF 00:20:56.266 )") 00:20:56.266 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:56.266 [... the nvmf/common.sh@562/@582 for/config+=/cat trace repeats identically for each of the ten subsystems; the bdevperf startup notices below arrived partway through that sequence ...] 00:20:56.266 [2024-12-12 10:34:30.233401] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:20:56.266 [2024-12-12 10:34:30.233459] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1575133 ] 00:20:56.266 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq .
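Each loop iteration above appends one bdev_nvme_attach_controller fragment to the config array, and the jq . at the end validates the assembled document before it ever reaches bdevperf. A reduced sketch of the join performed next (nvmf/common.sh@585-586 below); the real helper embeds the array in a larger JSON document, elided here, so the bracket wrapper is only illustrative:

    # Sketch: comma-join the collected JSON fragments and let jq reject
    # the config up front if any fragment is malformed.
    IFS=,
    printf '[%s]' "${config[*]}" | jq .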
00:20:56.266 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:20:56.266 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:56.266 "params": { 00:20:56.266 "name": "Nvme1", 00:20:56.266 "trtype": "tcp", 00:20:56.266 "traddr": "10.0.0.2", 00:20:56.266 "adrfam": "ipv4", 00:20:56.266 "trsvcid": "4420", 00:20:56.266 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:56.266 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:56.266 "hdgst": false, 00:20:56.267 "ddgst": false 00:20:56.267 }, 00:20:56.267 "method": "bdev_nvme_attach_controller" 00:20:56.267 },{ 00:20:56.267 "params": { 00:20:56.267 "name": "Nvme2", 00:20:56.267 "trtype": "tcp", 00:20:56.267 "traddr": "10.0.0.2", 00:20:56.267 "adrfam": "ipv4", 00:20:56.267 "trsvcid": "4420", 00:20:56.267 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:56.267 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:56.267 "hdgst": false, 00:20:56.267 "ddgst": false 00:20:56.267 }, 00:20:56.267 "method": "bdev_nvme_attach_controller" 00:20:56.267 },{ 00:20:56.267 "params": { 00:20:56.267 "name": "Nvme3", 00:20:56.267 "trtype": "tcp", 00:20:56.267 "traddr": "10.0.0.2", 00:20:56.267 "adrfam": "ipv4", 00:20:56.267 "trsvcid": "4420", 00:20:56.267 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:56.267 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:56.267 "hdgst": false, 00:20:56.267 "ddgst": false 00:20:56.267 }, 00:20:56.267 "method": "bdev_nvme_attach_controller" 00:20:56.267 },{ 00:20:56.267 "params": { 00:20:56.267 "name": "Nvme4", 00:20:56.267 "trtype": "tcp", 00:20:56.267 "traddr": "10.0.0.2", 00:20:56.267 "adrfam": "ipv4", 00:20:56.267 "trsvcid": "4420", 00:20:56.267 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:56.267 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:56.267 "hdgst": false, 00:20:56.267 "ddgst": false 00:20:56.267 }, 00:20:56.267 "method": "bdev_nvme_attach_controller" 00:20:56.267 },{ 00:20:56.267 "params": { 00:20:56.267 "name": "Nvme5", 00:20:56.267 "trtype": "tcp", 00:20:56.267 "traddr": "10.0.0.2", 00:20:56.267 "adrfam": "ipv4", 00:20:56.267 "trsvcid": "4420", 00:20:56.267 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:56.267 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:56.267 "hdgst": false, 00:20:56.267 "ddgst": false 00:20:56.267 }, 00:20:56.267 "method": "bdev_nvme_attach_controller" 00:20:56.267 },{ 00:20:56.267 "params": { 00:20:56.267 "name": "Nvme6", 00:20:56.267 "trtype": "tcp", 00:20:56.267 "traddr": "10.0.0.2", 00:20:56.267 "adrfam": "ipv4", 00:20:56.267 "trsvcid": "4420", 00:20:56.267 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:56.267 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:56.267 "hdgst": false, 00:20:56.267 "ddgst": false 00:20:56.267 }, 00:20:56.267 "method": "bdev_nvme_attach_controller" 00:20:56.267 },{ 00:20:56.267 "params": { 00:20:56.267 "name": "Nvme7", 00:20:56.267 "trtype": "tcp", 00:20:56.267 "traddr": "10.0.0.2", 00:20:56.267 "adrfam": "ipv4", 00:20:56.267 "trsvcid": "4420", 00:20:56.267 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:56.267 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:56.267 "hdgst": false, 00:20:56.267 "ddgst": false 00:20:56.267 }, 00:20:56.267 "method": "bdev_nvme_attach_controller" 00:20:56.267 },{ 00:20:56.267 "params": { 00:20:56.267 "name": "Nvme8", 00:20:56.267 "trtype": "tcp", 00:20:56.267 "traddr": "10.0.0.2", 00:20:56.267 "adrfam": "ipv4", 00:20:56.267 "trsvcid": "4420", 00:20:56.267 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:56.267 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:20:56.267 "hdgst": false, 00:20:56.267 "ddgst": false 00:20:56.267 }, 00:20:56.267 "method": "bdev_nvme_attach_controller" 00:20:56.267 },{ 00:20:56.267 "params": { 00:20:56.267 "name": "Nvme9", 00:20:56.267 "trtype": "tcp", 00:20:56.267 "traddr": "10.0.0.2", 00:20:56.267 "adrfam": "ipv4", 00:20:56.267 "trsvcid": "4420", 00:20:56.267 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:56.267 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:56.267 "hdgst": false, 00:20:56.267 "ddgst": false 00:20:56.267 }, 00:20:56.267 "method": "bdev_nvme_attach_controller" 00:20:56.267 },{ 00:20:56.267 "params": { 00:20:56.267 "name": "Nvme10", 00:20:56.267 "trtype": "tcp", 00:20:56.267 "traddr": "10.0.0.2", 00:20:56.267 "adrfam": "ipv4", 00:20:56.267 "trsvcid": "4420", 00:20:56.267 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:56.267 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:56.267 "hdgst": false, 00:20:56.267 "ddgst": false 00:20:56.267 }, 00:20:56.267 "method": "bdev_nvme_attach_controller" 00:20:56.267 }' 00:20:56.525 [2024-12-12 10:34:30.310480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.525 [2024-12-12 10:34:30.351046] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:57.897 Running I/O for 10 seconds... 00:20:57.897 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:57.897 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:20:57.897 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:57.897 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.897 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:57.897 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.897 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:57.897 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:57.897 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:57.897 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:20:57.897 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:20:57.897 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:20:57.897 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:20:57.897 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:57.897 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:57.897 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:57.897 10:34:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.897 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:57.897 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.155 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:20:58.155 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:20:58.155 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:58.413 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:58.413 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:58.413 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:58.413 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:58.413 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.413 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:58.413 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.413 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:20:58.413 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:20:58.413 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:58.687 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:58.687 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:58.687 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:58.687 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:58.687 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.687 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:58.687 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.688 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=195 00:20:58.688 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:20:58.688 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:20:58.688 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:20:58.688 10:34:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:20:58.688 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1574975 00:20:58.688 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1574975 ']' 00:20:58.688 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1574975 00:20:58.688 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:20:58.688 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:58.688 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1574975 00:20:58.688 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:58.688 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:58.688 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1574975' 00:20:58.688 killing process with pid 1574975 00:20:58.688 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 1574975 00:20:58.688 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 1574975 00:20:58.688 [2024-12-12 10:34:32.594444] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x575840 is same with the state(6) to be set 00:20:58.688 [... this tcp.c:1790 *ERROR* line repeats, differing only in timestamp, dozens of times each for tqpair=0x575840, 0x5783d0, 0x575d10, 0x5761e0 and 0x5766d0 as the killed target tears down its qpairs; the capture ends mid-burst ...]
same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.602728] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.602735] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.602741] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.602747] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.602753] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.602759] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.602765] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.602771] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.602777] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.602786] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.602793] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.602799] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.602805] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.602812] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.602818] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.602825] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.602831] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.602837] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.602843] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.602849] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.602855] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.602861] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.602866] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.602872] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.602878] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.602884] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.602890] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.602896] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.602902] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.602908] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.602914] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.602920] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.602926] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.602932] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.602939] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.602945] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.602952] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.602959] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.602964] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.602972] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.602978] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.602984] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.602989] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.602995] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.603002] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.603008] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.603014] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.603020] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.603026] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.603032] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.603038] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.603044] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.603050] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.603056] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.603062] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.603068] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.603073] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.603079] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.603085] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5766d0 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.605601] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.605612] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.605618] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.605625] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.605630] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.605638] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 
00:20:58.691 [2024-12-12 10:34:32.605644] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.605650] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.605656] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.605662] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.605668] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.605674] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.605682] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.605689] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.605695] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.605702] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.605707] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.605714] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.605720] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.691 [2024-12-12 10:34:32.605728] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.605734] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.605739] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.605745] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.605751] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.605758] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.605764] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.605770] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.605776] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is 
same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.605781] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.605787] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.605793] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.605799] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.605806] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.605813] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.605819] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.605825] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.605830] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.605836] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.605842] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.605848] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.605854] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.605859] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.605866] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.605871] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.605877] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.605883] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.605889] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.605895] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.605901] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.605907] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.605912] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.605919] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.605924] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.605930] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.605936] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.605942] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.605948] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.605954] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.605960] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.605967] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.605973] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.605979] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.605985] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577a30 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.606557] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.606580] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.606587] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.606593] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.606599] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.606605] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.606611] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.606617] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.606622] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.606628] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.606634] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.606640] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.606646] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.606652] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.606658] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.606664] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.606669] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.606675] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.606681] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.606686] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.606693] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.606698] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.606704] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.606709] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.606718] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.606724] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.606730] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.606735] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.606741] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.692 [2024-12-12 10:34:32.606747] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 
00:20:58.692 [2024-12-12 10:34:32.606753] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.693 [2024-12-12 10:34:32.606759] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.693 [2024-12-12 10:34:32.606764] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.693 [2024-12-12 10:34:32.606770] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.693 [2024-12-12 10:34:32.606776] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.693 [2024-12-12 10:34:32.606782] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.693 [2024-12-12 10:34:32.606788] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.693 [2024-12-12 10:34:32.606794] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.693 [2024-12-12 10:34:32.606800] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.693 [2024-12-12 10:34:32.606806] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.693 [2024-12-12 10:34:32.606814] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.693 [2024-12-12 10:34:32.606820] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.693 [2024-12-12 10:34:32.606826] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.693 [2024-12-12 10:34:32.606832] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.693 [2024-12-12 10:34:32.606838] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.693 [2024-12-12 10:34:32.606844] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.693 [2024-12-12 10:34:32.606850] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.693 [2024-12-12 10:34:32.606856] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.693 [2024-12-12 10:34:32.606862] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.693 [2024-12-12 10:34:32.606867] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.693 [2024-12-12 10:34:32.606873] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.693 [2024-12-12 10:34:32.606881] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is 
same with the state(6) to be set 00:20:58.693 [2024-12-12 10:34:32.606887] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.693 [2024-12-12 10:34:32.606893] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.693 [2024-12-12 10:34:32.606899] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577f00 is same with the state(6) to be set 00:20:58.693 [2024-12-12 10:34:32.612910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.693 [2024-12-12 10:34:32.612943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.693 [2024-12-12 10:34:32.612959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.693 [2024-12-12 10:34:32.612967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.693 [2024-12-12 10:34:32.612976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.693 [2024-12-12 10:34:32.612983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.693 [2024-12-12 10:34:32.612992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.693 [2024-12-12 10:34:32.612999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.693 [2024-12-12 10:34:32.613007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.693 [2024-12-12 10:34:32.613014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.693 [2024-12-12 10:34:32.613022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.693 [2024-12-12 10:34:32.613028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.693 [2024-12-12 10:34:32.613036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.693 [2024-12-12 10:34:32.613043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.693 [2024-12-12 10:34:32.613051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.693 [2024-12-12 10:34:32.613057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.693 [2024-12-12 10:34:32.613065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.693 
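[Editor's note: the flood above is SPDK's target-side TCP transport rejecting a redundant recv-state transition while the connection is torn down; state(6) is the terminal error state of the PDU recv-state enum in recent SPDK trees (the exact enum layout varies by version). A self-contained sketch of the guard implied by the message text follows; the type and enum names are stand-ins, not SPDK's own.]

#include <stdio.h>

/* Stand-in for SPDK's nvme_tcp_pdu_recv_state; value 6 is assumed to be
 * the terminal error state, matching "state(6)" in the log above. */
enum pdu_recv_state { RECV_STATE_AWAIT_PDU_READY = 0, RECV_STATE_ERROR = 6 };

struct tcp_qpair { enum pdu_recv_state recv_state; };

static void set_recv_state(struct tcp_qpair *tqpair, enum pdu_recv_state state)
{
	if (tqpair->recv_state == state) {
		/* Redundant transition: logged and dropped, which is why the
		 * same line can repeat hundreds of times during teardown. */
		fprintf(stderr, "The recv state of tqpair=%p is same with the state(%d) to be set\n",
			(void *)tqpair, (int)state);
		return;
	}
	tqpair->recv_state = state; /* the real code also resets per-state PDU bookkeeping */
}

int main(void)
{
	struct tcp_qpair q = { .recv_state = RECV_STATE_ERROR };
	set_recv_state(&q, RECV_STATE_ERROR); /* reproduces one line of the flood */
	return 0;
}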
00:20:58.693 [2024-12-12 10:34:32.612910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.693 [2024-12-12 10:34:32.612943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same NOTICE pair repeats for every outstanding I/O on the qpair: WRITE sqid:1 cid:39-63 (lba:29568-32640) and READ sqid:1 cid:0-37 (lba:24576-29312), each len:128, each command print followed by an ABORTED - SQ DELETION (00/08) completion ...]
00:20:58.694 [2024-12-12 10:34:32.613914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
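[Editor's note: once the socket is gone, every command still outstanding on the queue pair is completed with NVMe generic status 08h, "Command Aborted due to SQ Deletion"; the "(00/08)" printed above is the status code type / status code pair in hex, and spdk_nvme_qpair_process_completions surfaces the dead connection as transport error -6, i.e. -ENXIO. A self-contained sketch of decoding that status pair; the struct and helper names are illustrative, not SPDK's.]

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Status pair as printed in the log, "(SCT/SC)" in hex. */
struct cpl_status {
	uint8_t sct; /* status code type: 0x0 = generic command status */
	uint8_t sc;  /* status code: 0x08 = command aborted due to SQ deletion */
};

static bool aborted_by_sq_deletion(struct cpl_status st)
{
	return st.sct == 0x0 && st.sc == 0x08;
}

int main(void)
{
	struct cpl_status st = { .sct = 0x00, .sc = 0x08 }; /* "(00/08)" */
	printf("aborted by SQ deletion: %s\n", aborted_by_sq_deletion(st) ? "yes" : "no");
	return 0;
}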
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.694 [2024-12-12 10:34:32.616123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.694 [2024-12-12 10:34:32.616132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.695 [2024-12-12 10:34:32.616138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.695 [2024-12-12 10:34:32.616147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.695 [2024-12-12 10:34:32.616153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.695 [2024-12-12 10:34:32.616162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.695 [2024-12-12 10:34:32.616168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.695 [2024-12-12 10:34:32.616176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.695 [2024-12-12 10:34:32.616186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.695 [2024-12-12 10:34:32.616194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.695 [2024-12-12 10:34:32.616202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.695 [2024-12-12 10:34:32.616209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.695 [2024-12-12 10:34:32.616216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.695 [2024-12-12 10:34:32.616224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.695 [2024-12-12 10:34:32.616230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.695 [2024-12-12 10:34:32.616238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.695 [2024-12-12 10:34:32.616245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.695 [2024-12-12 10:34:32.616253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.695 [2024-12-12 10:34:32.616259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.695 [2024-12-12 10:34:32.616267] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.695 [2024-12-12 10:34:32.616274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.695 [2024-12-12 10:34:32.616282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.695 [2024-12-12 10:34:32.616288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.695 [2024-12-12 10:34:32.616296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.695 [2024-12-12 10:34:32.616303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.695 [2024-12-12 10:34:32.616311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.695 [2024-12-12 10:34:32.616317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.695 [2024-12-12 10:34:32.616325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.695 [2024-12-12 10:34:32.616332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.695 [2024-12-12 10:34:32.616340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.695 [2024-12-12 10:34:32.616346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.695 [2024-12-12 10:34:32.616354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.695 [2024-12-12 10:34:32.616361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.695 [2024-12-12 10:34:32.616370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.695 [2024-12-12 10:34:32.616377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.695 [2024-12-12 10:34:32.616385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.695 [2024-12-12 10:34:32.616391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.695 [2024-12-12 10:34:32.616400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.695 [2024-12-12 10:34:32.616406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.695 [2024-12-12 10:34:32.616414] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.695 [2024-12-12 10:34:32.616420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.695 [2024-12-12 10:34:32.616428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.695 [2024-12-12 10:34:32.616434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.695 [2024-12-12 10:34:32.616442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.695 [2024-12-12 10:34:32.616448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.695 [2024-12-12 10:34:32.616456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.695 [2024-12-12 10:34:32.616462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.695 [2024-12-12 10:34:32.616470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.695 [2024-12-12 10:34:32.616477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.695 [2024-12-12 10:34:32.616485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.695 [2024-12-12 10:34:32.616491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.695 [2024-12-12 10:34:32.616499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.695 [2024-12-12 10:34:32.616506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.695 [2024-12-12 10:34:32.616514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.695 [2024-12-12 10:34:32.616520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.695 [2024-12-12 10:34:32.616536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.695 [2024-12-12 10:34:32.616542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.695 [2024-12-12 10:34:32.616551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.695 [2024-12-12 10:34:32.616559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.695 [2024-12-12 10:34:32.616567] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.695 [2024-12-12 10:34:32.616579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.695 [2024-12-12 10:34:32.616587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.695 [2024-12-12 10:34:32.616593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.695 [2024-12-12 10:34:32.616601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.695 [2024-12-12 10:34:32.616608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.695 [2024-12-12 10:34:32.616616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.695 [2024-12-12 10:34:32.616622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.695 [2024-12-12 10:34:32.616630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.695 [2024-12-12 10:34:32.616636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.695 [2024-12-12 10:34:32.616644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.695 [2024-12-12 10:34:32.616651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.695 [2024-12-12 10:34:32.616659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.695 [2024-12-12 10:34:32.616665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.695 [2024-12-12 10:34:32.616673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.695 [2024-12-12 10:34:32.616680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.695 [2024-12-12 10:34:32.616687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.695 [2024-12-12 10:34:32.616694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.695 [2024-12-12 10:34:32.616702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.695 [2024-12-12 10:34:32.616708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.695 [2024-12-12 10:34:32.616716] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.695 [2024-12-12 10:34:32.616722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.695 [2024-12-12 10:34:32.616730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.696 [2024-12-12 10:34:32.616737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.696 [2024-12-12 10:34:32.616746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.696 [2024-12-12 10:34:32.616752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.696 [2024-12-12 10:34:32.616760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.696 [2024-12-12 10:34:32.616768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.696 [2024-12-12 10:34:32.616778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.696 [2024-12-12 10:34:32.616785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.696 [2024-12-12 10:34:32.616792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.696 [2024-12-12 10:34:32.616799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.696 [2024-12-12 10:34:32.616807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.696 [2024-12-12 10:34:32.616814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.696 [2024-12-12 10:34:32.616822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.696 [2024-12-12 10:34:32.616828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.696 [2024-12-12 10:34:32.616836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.696 [2024-12-12 10:34:32.616843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.696 [2024-12-12 10:34:32.616851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.696 [2024-12-12 10:34:32.616857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.696 [2024-12-12 10:34:32.616865] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.696 [2024-12-12 10:34:32.616872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.696 [2024-12-12 10:34:32.616880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.696 [2024-12-12 10:34:32.616886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.696 [2024-12-12 10:34:32.616894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.696 [2024-12-12 10:34:32.616901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.696 [2024-12-12 10:34:32.616909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.696 [2024-12-12 10:34:32.616915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.696 [2024-12-12 10:34:32.616923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.696 [2024-12-12 10:34:32.616931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.696 [2024-12-12 10:34:32.616939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.696 [2024-12-12 10:34:32.616946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.696 [2024-12-12 10:34:32.616953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.696 [2024-12-12 10:34:32.616960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.696 [2024-12-12 10:34:32.616967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.696 [2024-12-12 10:34:32.616974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.696 [2024-12-12 10:34:32.616982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.696 [2024-12-12 10:34:32.616988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.696 [2024-12-12 10:34:32.616996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.696 [2024-12-12 10:34:32.617003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.696 [2024-12-12 10:34:32.617025] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:58.696 [2024-12-12 10:34:32.617396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.696 [2024-12-12 10:34:32.617416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.696 [2024-12-12 10:34:32.617424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.696 [2024-12-12 10:34:32.617431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.696 [2024-12-12 10:34:32.617439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.696 [2024-12-12 10:34:32.617445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.696 [2024-12-12 10:34:32.617452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.696 [2024-12-12 10:34:32.617459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.696 [2024-12-12 10:34:32.617466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171ad10 is same with the state(6) to be set 00:20:58.696 [2024-12-12 10:34:32.617494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.696 [2024-12-12 10:34:32.617502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.696 [2024-12-12 10:34:32.617510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.696 [2024-12-12 10:34:32.617517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.696 [2024-12-12 10:34:32.617527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.696 [2024-12-12 10:34:32.617534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.696 [2024-12-12 10:34:32.617541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.696 [2024-12-12 10:34:32.617548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.696 [2024-12-12 10:34:32.617555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f9ed0 is same with the state(6) to be set 00:20:58.696 [2024-12-12 10:34:32.617588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.696 [2024-12-12 10:34:32.617597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.696 [2024-12-12 10:34:32.617604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.696 [2024-12-12 10:34:32.617610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.696 [2024-12-12 10:34:32.617618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.696 [2024-12-12 10:34:32.617624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.696 [2024-12-12 10:34:32.617631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.696 [2024-12-12 10:34:32.617637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.696 [2024-12-12 10:34:32.617644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ce350 is same with the state(6) to be set 00:20:58.696 [2024-12-12 10:34:32.617665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.696 [2024-12-12 10:34:32.617673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.696 [2024-12-12 10:34:32.617681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.696 [2024-12-12 10:34:32.617688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.696 [2024-12-12 10:34:32.617695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.696 [2024-12-12 10:34:32.617704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.697 [2024-12-12 10:34:32.617711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.697 [2024-12-12 10:34:32.617718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.697 [2024-12-12 10:34:32.617724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b8610 is same with the state(6) to be set 00:20:58.697 [2024-12-12 10:34:32.617744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.697 [2024-12-12 10:34:32.617752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.697 [2024-12-12 10:34:32.617760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.697 [2024-12-12 10:34:32.617769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.697 [2024-12-12 
10:34:32.617777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.697 [2024-12-12 10:34:32.617783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.697 [2024-12-12 10:34:32.617791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.697 [2024-12-12 10:34:32.617797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.697 [2024-12-12 10:34:32.617804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171ab30 is same with the state(6) to be set 00:20:58.697 [2024-12-12 10:34:32.617829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.697 [2024-12-12 10:34:32.617837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.697 [2024-12-12 10:34:32.617845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.697 [2024-12-12 10:34:32.617852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.697 [2024-12-12 10:34:32.617858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.697 [2024-12-12 10:34:32.617865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.697 [2024-12-12 10:34:32.617872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.697 [2024-12-12 10:34:32.617879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.697 [2024-12-12 10:34:32.617885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c81a0 is same with the state(6) to be set 00:20:58.697 [2024-12-12 10:34:32.617910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.697 [2024-12-12 10:34:32.617918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.697 [2024-12-12 10:34:32.617926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.697 [2024-12-12 10:34:32.617932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.697 [2024-12-12 10:34:32.617942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.697 [2024-12-12 10:34:32.617948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.697 [2024-12-12 10:34:32.617956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.697 [2024-12-12 10:34:32.617962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.697 [2024-12-12 10:34:32.617968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12974d0 is same with the state(6) to be set 00:20:58.697 [2024-12-12 10:34:32.617991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.697 [2024-12-12 10:34:32.618002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.697 [2024-12-12 10:34:32.618010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.697 [2024-12-12 10:34:32.618017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.697 [2024-12-12 10:34:32.618024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.697 [2024-12-12 10:34:32.618030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.697 [2024-12-12 10:34:32.618037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.697 [2024-12-12 10:34:32.618043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.697 [2024-12-12 10:34:32.618050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12972d0 is same with the state(6) to be set 00:20:58.697 [2024-12-12 10:34:32.618073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.697 [2024-12-12 10:34:32.618081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.697 [2024-12-12 10:34:32.618089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.697 [2024-12-12 10:34:32.618095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.697 [2024-12-12 10:34:32.618102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.697 [2024-12-12 10:34:32.618109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.697 [2024-12-12 10:34:32.618116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.697 [2024-12-12 10:34:32.618122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.697 [2024-12-12 10:34:32.618128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a3490 is same with the state(6) to be set 00:20:58.697 [2024-12-12 
10:34:32.618150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.697 [2024-12-12 10:34:32.618157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.697 [2024-12-12 10:34:32.618165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.697 [2024-12-12 10:34:32.618171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.697 [2024-12-12 10:34:32.618178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.697 [2024-12-12 10:34:32.618184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.697 [2024-12-12 10:34:32.618191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.697 [2024-12-12 10:34:32.618198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.697 [2024-12-12 10:34:32.618204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12996c0 is same with the state(6) to be set 00:20:58.697 [2024-12-12 10:34:32.618455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.697 [2024-12-12 10:34:32.618471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.697 [2024-12-12 10:34:32.618485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.697 [2024-12-12 10:34:32.618492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.697 [2024-12-12 10:34:32.618502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.697 [2024-12-12 10:34:32.618508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.697 [2024-12-12 10:34:32.618518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.697 [2024-12-12 10:34:32.618524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.697 [2024-12-12 10:34:32.618533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.697 [2024-12-12 10:34:32.618539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.697 [2024-12-12 10:34:32.618548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.697 [2024-12-12 10:34:32.618554] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.697 [2024-12-12 10:34:32.618562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.697 [2024-12-12 10:34:32.618574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.697 [2024-12-12 10:34:32.618583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.697 [2024-12-12 10:34:32.618590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.697 [2024-12-12 10:34:32.618598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.697 [2024-12-12 10:34:32.618604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.697 [2024-12-12 10:34:32.618612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.697 [2024-12-12 10:34:32.618619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.697 [2024-12-12 10:34:32.618627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.697 [2024-12-12 10:34:32.618633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.697 [2024-12-12 10:34:32.618641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.697 [2024-12-12 10:34:32.618648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.697 [2024-12-12 10:34:32.618656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.698 [2024-12-12 10:34:32.618665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.698 [2024-12-12 10:34:32.618673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.698 [2024-12-12 10:34:32.618679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.698 [2024-12-12 10:34:32.618687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.698 [2024-12-12 10:34:32.618694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.698 [2024-12-12 10:34:32.618703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.698 [2024-12-12 10:34:32.618709] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.698 [2024-12-12 10:34:32.618717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.698 [2024-12-12 10:34:32.618724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.698 [2024-12-12 10:34:32.618732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.698 [2024-12-12 10:34:32.618738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.698 [2024-12-12 10:34:32.618747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.698 [2024-12-12 10:34:32.618753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.698 [2024-12-12 10:34:32.618761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.698 [2024-12-12 10:34:32.618768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.698 [2024-12-12 10:34:32.618776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.698 [2024-12-12 10:34:32.618782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.698 [2024-12-12 10:34:32.618790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.698 [2024-12-12 10:34:32.618797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.698 [2024-12-12 10:34:32.618805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.698 [2024-12-12 10:34:32.618811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.698 [2024-12-12 10:34:32.623971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.698 [2024-12-12 10:34:32.623980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.698 [2024-12-12 10:34:32.623990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.698 [2024-12-12 10:34:32.623996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.698 [2024-12-12 10:34:32.624007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.698 [2024-12-12 10:34:32.624014] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.698 [2024-12-12 10:34:32.624022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.698 [2024-12-12 10:34:32.624029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.698 [2024-12-12 10:34:32.624037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.698 [2024-12-12 10:34:32.624043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.698 [2024-12-12 10:34:32.624051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.698 [2024-12-12 10:34:32.624058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.698 [2024-12-12 10:34:32.624066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.698 [2024-12-12 10:34:32.624072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.698 [2024-12-12 10:34:32.624080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.698 [2024-12-12 10:34:32.624087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.698 [2024-12-12 10:34:32.624095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.698 [2024-12-12 10:34:32.624101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.698 [2024-12-12 10:34:32.624109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.698 [2024-12-12 10:34:32.624116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.698 [2024-12-12 10:34:32.624124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.698 [2024-12-12 10:34:32.624130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.698 [2024-12-12 10:34:32.624138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.698 [2024-12-12 10:34:32.624145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.698 [2024-12-12 10:34:32.624153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.698 [2024-12-12 10:34:32.624159] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.698 [2024-12-12 10:34:32.624167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.698 [2024-12-12 10:34:32.624174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.698 [2024-12-12 10:34:32.624182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.698 [2024-12-12 10:34:32.624190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.698 [2024-12-12 10:34:32.624198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.698 [2024-12-12 10:34:32.624204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.698 [2024-12-12 10:34:32.624212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.698 [2024-12-12 10:34:32.624219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.698 [2024-12-12 10:34:32.624227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.698 [2024-12-12 10:34:32.624233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.698 [2024-12-12 10:34:32.624241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.698 [2024-12-12 10:34:32.624248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.698 [2024-12-12 10:34:32.624256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.698 [2024-12-12 10:34:32.624262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.698 [2024-12-12 10:34:32.624270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.698 [2024-12-12 10:34:32.624277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.698 [2024-12-12 10:34:32.624285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.698 [2024-12-12 10:34:32.624292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.698 [2024-12-12 10:34:32.624300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.698 [2024-12-12 10:34:32.624306] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:58.698 [2024-12-12 10:34:32.624314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46-63 nsid:1 lba:30464-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 (18 commands, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0)
00:20:58.699 [2024-12-12 10:34:32.624582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a8f50 is same with the state(6) to be set
00:20:58.699 [2024-12-12 10:34:32.626465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 (64 commands, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0)
00:20:58.700 [2024-12-12 10:34:32.627588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:20:58.700 [2024-12-12 10:34:32.627614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16ce350 (9): Bad file descriptor
00:20:58.700 [2024-12-12 10:34:32.627654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171ad10 (9): Bad file descriptor
00:20:58.700 [2024-12-12 10:34:32.627666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f9ed0 (9): Bad file descriptor
00:20:58.700 [2024-12-12 10:34:32.627679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b8610 (9): Bad file descriptor
00:20:58.700 [2024-12-12 10:34:32.627692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171ab30 (9): Bad file descriptor
00:20:58.700 [2024-12-12 10:34:32.627703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16c81a0 (9): Bad file descriptor
00:20:58.700 [2024-12-12 10:34:32.627715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12974d0 (9): Bad file descriptor
00:20:58.700 [2024-12-12 10:34:32.627728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12972d0 (9): Bad file descriptor
00:20:58.701 [2024-12-12 10:34:32.627742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a3490 (9): Bad file descriptor
00:20:58.701 [2024-12-12 10:34:32.627756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12996c0 (9): Bad file descriptor
00:20:58.701 [2024-12-12 10:34:32.629851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:20:58.701 [2024-12-12 10:34:32.630683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:20:58.701 [2024-12-12 10:34:32.630713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:20:58.701 [2024-12-12 10:34:32.630864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.701 [2024-12-12 10:34:32.630880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16ce350 with addr=10.0.0.2, port=4420
00:20:58.701 [2024-12-12 10:34:32.630888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ce350 is same with the state(6) to be set
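Every "(00/08)" pair in the completions above encodes the NVMe status fields: status code type 0x00 (generic) and status code 0x08, ABORTED - SQ DELETION, meaning the I/O submission queue was deleted (here, as part of the controller reset) before the outstanding READs and WRITEs could complete. A minimal sketch, assuming SPDK's public NVMe driver API from spdk/nvme.h (the callback name and retry policy are illustrative, not part of this test), of how a completion callback can recognize that status:

/* Completion callback matching SPDK's spdk_nvme_cmd_cb signature. */
#include "spdk/nvme.h"
#include <stdio.h>

static void
io_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	(void)ctx;
	if (!spdk_nvme_cpl_is_error(cpl)) {
		return; /* I/O succeeded. */
	}
	/* "(00/08)" in the log is (sct/sc): generic status code type,
	 * status code ABORTED - SQ DELETION. */
	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* The queue pair was deleted (e.g. a controller reset is in
		 * progress); the I/O can be resubmitted after reconnect. */
		printf("I/O aborted by SQ deletion; requeue for retry\n");
		return;
	}
	printf("I/O failed: sct=0x%x sc=0x%x\n",
	       cpl->status.sct, cpl->status.sc);
}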
00:20:58.701 [2024-12-12 10:34:32.631040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.701 [2024-12-12 10:34:32.631050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b8610 with addr=10.0.0.2, port=4420
00:20:58.701 [2024-12-12 10:34:32.631057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b8610 is same with the state(6) to be set
00:20:58.701 [2024-12-12 10:34:32.631109] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:58.701 [2024-12-12 10:34:32.631659] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:58.701 [2024-12-12 10:34:32.632044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.701 [2024-12-12 10:34:32.632061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16c81a0 with addr=10.0.0.2, port=4420
00:20:58.701 [2024-12-12 10:34:32.632069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c81a0 is same with the state(6) to be set
00:20:58.701 [2024-12-12 10:34:32.632239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.701 [2024-12-12 10:34:32.632250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x171ad10 with addr=10.0.0.2, port=4420
00:20:58.701 [2024-12-12 10:34:32.632257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171ad10 is same with the state(6) to be set
00:20:58.701 [2024-12-12 10:34:32.632267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16ce350 (9): Bad file descriptor
00:20:58.701 [2024-12-12 10:34:32.632277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b8610 (9): Bad file descriptor
00:20:58.701 [2024-12-12 10:34:32.632343] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:58.701 [2024-12-12 10:34:32.632389] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:58.701 [2024-12-12 10:34:32.632431] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:58.701 [2024-12-12 10:34:32.632483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4-57 nsid:1 lba:25088-31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 (54 commands, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0)
00:20:58.702 [2024-12-12 10:34:32.633305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0-3 nsid:1 lba:32768-33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 (4 commands, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0)
00:20:58.702 [2024-12-12 10:34:32.633366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58-63 nsid:1 lba:32000-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 (6 commands, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0)
00:20:58.702 [2024-12-12 10:34:32.633453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f2530 is same with the state(6) to be set
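The "connect() failed, errno = 111" records are Linux ECONNREFUSED: while each subsystem is being reset, nothing is listening on the target at 10.0.0.2:4420, so the reconnect attempts are refused; the "(9): Bad file descriptor" flush failures are EBADF on sockets the disconnect already closed. A self-contained sketch with plain POSIX sockets (not SPDK code; the address and port are taken from the log above) that reproduces the same errno:

#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int
main(void)
{
	/* NVMe/TCP target address from the log above. */
	struct sockaddr_in addr = { 0 };
	addr.sin_family = AF_INET;
	addr.sin_port = htons(4420);
	inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

	int fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0) {
		return 1;
	}
	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
		/* With no listener on the port this prints
		 * "errno = 111 (Connection refused)" on Linux. */
		printf("connect() failed, errno = %d (%s)\n",
		       errno, strerror(errno));
	}
	close(fd);
	return 0;
}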
00:20:58.702 [2024-12-12 10:34:32.633554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5-53 nsid:1 lba:17024-23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 (49 commands, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0)
00:20:58.704 [2024-12-12 10:34:32.634297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.704 [2024-12-12 10:34:32.634304] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.704 [2024-12-12 10:34:32.634312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.704 [2024-12-12 10:34:32.634318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.704 [2024-12-12 10:34:32.634326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.704 [2024-12-12 10:34:32.634333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.704 [2024-12-12 10:34:32.634341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.704 [2024-12-12 10:34:32.634347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.704 [2024-12-12 10:34:32.634355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.704 [2024-12-12 10:34:32.634361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.704 [2024-12-12 10:34:32.634370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.704 [2024-12-12 10:34:32.634376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.704 [2024-12-12 10:34:32.634384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.704 [2024-12-12 10:34:32.634391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.704 [2024-12-12 10:34:32.634399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.704 [2024-12-12 10:34:32.634407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.704 [2024-12-12 10:34:32.634415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.704 [2024-12-12 10:34:32.634421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.704 [2024-12-12 10:34:32.634429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.704 [2024-12-12 10:34:32.634436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.704 [2024-12-12 10:34:32.634444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.704 [2024-12-12 10:34:32.634450] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.704 [2024-12-12 10:34:32.634459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.704 [2024-12-12 10:34:32.634467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.704 [2024-12-12 10:34:32.634475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.704 [2024-12-12 10:34:32.634482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.704 [2024-12-12 10:34:32.634490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.704 [2024-12-12 10:34:32.634496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.704 [2024-12-12 10:34:32.634505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.704 [2024-12-12 10:34:32.634511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.704 [2024-12-12 10:34:32.634519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14e83e0 is same with the state(6) to be set 00:20:58.704 [2024-12-12 10:34:32.634592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16c81a0 (9): Bad file descriptor 00:20:58.704 [2024-12-12 10:34:32.634604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171ad10 (9): Bad file descriptor 00:20:58.704 [2024-12-12 10:34:32.634612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:20:58.704 [2024-12-12 10:34:32.634619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:20:58.704 [2024-12-12 10:34:32.634627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:20:58.704 [2024-12-12 10:34:32.634635] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:20:58.704 [2024-12-12 10:34:32.634643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:20:58.704 [2024-12-12 10:34:32.634650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:20:58.704 [2024-12-12 10:34:32.634656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:20:58.704 [2024-12-12 10:34:32.634662] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
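Every aborted completion above carries the NVMe status pair (00/08): status code type 0x0 (Generic Command Status) and status code 0x08, which the NVMe spec defines as Command Aborted due to SQ Deletion, i.e. the expected status for I/O still in flight when a submission queue is torn down during a reset. A minimal decode sketch in Python (the lookup tables below are deliberately trimmed to the codes seen in this log, not the full spec tables):

    # Decode the "(SCT/SC)" pair printed by spdk_nvme_print_completion.
    # Tables are trimmed to the values appearing in this log.
    SCT = {0x0: "GENERIC COMMAND STATUS"}
    SC = {0x0: "SUCCESSFUL COMPLETION", 0x8: "ABORTED - SQ DELETION"}

    def decode(status: str) -> str:
        """Turn a '00/08'-style field into human-readable SCT/SC names."""
        sct, sc = (int(x, 16) for x in status.split("/"))
        return f"{SCT.get(sct, hex(sct))} / {SC.get(sc, hex(sc))}"

    print(decode("00/08"))  # -> GENERIC COMMAND STATUS / ABORTED - SQ DELETION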
00:20:58.704 [2024-12-12 10:34:32.636497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:20:58.704 [2024-12-12 10:34:32.636519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:20:58.704 [2024-12-12 10:34:32.636544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:20:58.704 [2024-12-12 10:34:32.636552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:20:58.704 [2024-12-12 10:34:32.636559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:20:58.704 [2024-12-12 10:34:32.636566] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:20:58.704 [2024-12-12 10:34:32.636578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:20:58.704 [2024-12-12 10:34:32.636583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:20:58.704 [2024-12-12 10:34:32.636590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:20:58.704 [2024-12-12 10:34:32.636597] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:20:58.704 [2024-12-12 10:34:32.636866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.704 [2024-12-12 10:34:32.636880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x171ab30 with addr=10.0.0.2, port=4420
00:20:58.704 [2024-12-12 10:34:32.636888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171ab30 is same with the state(6) to be set
00:20:58.704 [2024-12-12 10:34:32.637130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.704 [2024-12-12 10:34:32.637141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f9ed0 with addr=10.0.0.2, port=4420
00:20:58.704 [2024-12-12 10:34:32.637148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f9ed0 is same with the state(6) to be set
00:20:58.704 [2024-12-12 10:34:32.637641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171ab30 (9): Bad file descriptor
00:20:58.704 [2024-12-12 10:34:32.637674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f9ed0 (9): Bad file descriptor
00:20:58.704 [2024-12-12 10:34:32.637755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:20:58.704 [2024-12-12 10:34:32.637764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:20:58.704 [2024-12-12 10:34:32.637772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:20:58.704 [2024-12-12 10:34:32.637779] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
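The connect() failures above report errno = 111, which on Linux is ECONNREFUSED: nothing was accepting connections on 10.0.0.2:4420 at the moment spdk_nvme_ctrlr_reconnect_poll_async retried, so each reconnect fails immediately and bdev_nvme gives up the reset. A small Python check of that mapping, plus a reproduction against a closed port (the address and port are taken from the log; any TCP port with no listener behaves the same way):

    import errno
    import socket

    # errno 111 is ECONNREFUSED on Linux, the code posix_sock_create reports above.
    print(111, errno.errorcode.get(111))  # -> 111 ECONNREFUSED

    # Connecting to a port with no listener reproduces the same failure mode.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.connect(("10.0.0.2", 4420))  # target address/port from the log
    except OSError as e:
        print(e.errno, e.strerror)     # -> 111 Connection refused (no listener)
    finally:
        s.close()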
00:20:58.704 [2024-12-12 10:34:32.637787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:20:58.704 [2024-12-12 10:34:32.637793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:20:58.704 [2024-12-12 10:34:32.637800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:20:58.704 [2024-12-12 10:34:32.637806] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:20:58.704 [2024-12-12 10:34:32.637855 .. 10:34:32.638828] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:0..63 nsid:1 lba:24576..32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [64 repeated command/completion pairs condensed]
00:20:58.706 [2024-12-12 10:34:32.638837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a73b0 is same with the state(6) to be set
00:20:58.706 [2024-12-12 10:34:32.639818 .. 10:34:32.640774] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:4..63 nsid:1 lba:25088..32640 len:128 interleaved with WRITE sqid:1 cid:0..3 nsid:1 lba:32768..33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [64 repeated command/completion pairs condensed]
00:20:58.708 [2024-12-12 10:34:32.640781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8580 is same with the state(6) to be set
00:20:58.708 [2024-12-12 10:34:32.641767 .. 10:34:32.641900] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:0..8 nsid:1 lba:24576..25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [9 repeated command/completion pairs condensed]
00:20:58.708 [2024-12-12 10:34:32.641908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:58.708 [2024-12-12 10:34:32.641915] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.708 [2024-12-12 10:34:32.641925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.708 [2024-12-12 10:34:32.641931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.708 [2024-12-12 10:34:32.641939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.708 [2024-12-12 10:34:32.641946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.708 [2024-12-12 10:34:32.641954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.708 [2024-12-12 10:34:32.641961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.708 [2024-12-12 10:34:32.641969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.708 [2024-12-12 10:34:32.641975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.708 [2024-12-12 10:34:32.641984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.708 [2024-12-12 10:34:32.641990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.708 [2024-12-12 10:34:32.641998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.708 [2024-12-12 10:34:32.642005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.708 [2024-12-12 10:34:32.642013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.708 [2024-12-12 10:34:32.642019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.708 [2024-12-12 10:34:32.642027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.708 [2024-12-12 10:34:32.642034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.708 [2024-12-12 10:34:32.642042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.708 [2024-12-12 10:34:32.642048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.708 [2024-12-12 10:34:32.642056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.708 [2024-12-12 10:34:32.642063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.708 [2024-12-12 10:34:32.642071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.708 [2024-12-12 10:34:32.642078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.708 [2024-12-12 10:34:32.642086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.708 [2024-12-12 10:34:32.642093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.708 [2024-12-12 10:34:32.642101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.708 [2024-12-12 10:34:32.642109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.708 [2024-12-12 10:34:32.642117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.708 [2024-12-12 10:34:32.642124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.708 [2024-12-12 10:34:32.642132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.708 [2024-12-12 10:34:32.642138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.708 [2024-12-12 10:34:32.642147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.708 [2024-12-12 10:34:32.642154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.708 [2024-12-12 10:34:32.642162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.709 [2024-12-12 10:34:32.642169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.709 [2024-12-12 10:34:32.642177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.709 [2024-12-12 10:34:32.642183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.709 [2024-12-12 10:34:32.642191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.709 [2024-12-12 10:34:32.642198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.709 [2024-12-12 10:34:32.642206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.709 [2024-12-12 10:34:32.642212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.709 [2024-12-12 10:34:32.642221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.709 [2024-12-12 10:34:32.642227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.709 [2024-12-12 10:34:32.642236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.709 [2024-12-12 10:34:32.642242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.709 [2024-12-12 10:34:32.642250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.709 [2024-12-12 10:34:32.642256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.709 [2024-12-12 10:34:32.642265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.709 [2024-12-12 10:34:32.642271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.709 [2024-12-12 10:34:32.642279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.709 [2024-12-12 10:34:32.642286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.709 [2024-12-12 10:34:32.642295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.709 [2024-12-12 10:34:32.642302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.709 [2024-12-12 10:34:32.642310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.709 [2024-12-12 10:34:32.642317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.709 [2024-12-12 10:34:32.642325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.709 [2024-12-12 10:34:32.642331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.709 [2024-12-12 10:34:32.642339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.709 [2024-12-12 10:34:32.642346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.709 [2024-12-12 10:34:32.642354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.709 [2024-12-12 10:34:32.642360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:58.709 [2024-12-12 10:34:32.642369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.709 [2024-12-12 10:34:32.642375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.709 [2024-12-12 10:34:32.642383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.709 [2024-12-12 10:34:32.642390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.709 [2024-12-12 10:34:32.642398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.709 [2024-12-12 10:34:32.642404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.709 [2024-12-12 10:34:32.642412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.709 [2024-12-12 10:34:32.642418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.709 [2024-12-12 10:34:32.642427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.709 [2024-12-12 10:34:32.642433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.709 [2024-12-12 10:34:32.642441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.709 [2024-12-12 10:34:32.642447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.709 [2024-12-12 10:34:32.642456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.709 [2024-12-12 10:34:32.642462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.709 [2024-12-12 10:34:32.642470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.709 [2024-12-12 10:34:32.642478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.709 [2024-12-12 10:34:32.642486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.709 [2024-12-12 10:34:32.642493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.709 [2024-12-12 10:34:32.642501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.709 [2024-12-12 10:34:32.642507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:58.709 [2024-12-12 10:34:32.642515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.709 [2024-12-12 10:34:32.642522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.709 [2024-12-12 10:34:32.642529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.709 [2024-12-12 10:34:32.642536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.709 [2024-12-12 10:34:32.642544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.709 [2024-12-12 10:34:32.642550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.709 [2024-12-12 10:34:32.642558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.709 [2024-12-12 10:34:32.642566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.709 [2024-12-12 10:34:32.642577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.709 [2024-12-12 10:34:32.642584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.709 [2024-12-12 10:34:32.642593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.709 [2024-12-12 10:34:32.642599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.709 [2024-12-12 10:34:32.642608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.709 [2024-12-12 10:34:32.642615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.709 [2024-12-12 10:34:32.642624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.709 [2024-12-12 10:34:32.642630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.709 [2024-12-12 10:34:32.642638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.709 [2024-12-12 10:34:32.642645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.709 [2024-12-12 10:34:32.642654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.709 [2024-12-12 10:34:32.642660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.709 [2024-12-12 
10:34:32.642668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.710 [2024-12-12 10:34:32.642677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.710 [2024-12-12 10:34:32.642686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.710 [2024-12-12 10:34:32.642693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.710 [2024-12-12 10:34:32.642701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.710 [2024-12-12 10:34:32.642707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.710 [2024-12-12 10:34:32.642715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.710 [2024-12-12 10:34:32.642722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.710 [2024-12-12 10:34:32.642729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ab80 is same with the state(6) to be set 00:20:58.710 [2024-12-12 10:34:32.643711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.710 [2024-12-12 10:34:32.643724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.710 [2024-12-12 10:34:32.643734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.710 [2024-12-12 10:34:32.643741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.710 [2024-12-12 10:34:32.643751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.710 [2024-12-12 10:34:32.643758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.710 [2024-12-12 10:34:32.643766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.710 [2024-12-12 10:34:32.643773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.710 [2024-12-12 10:34:32.643782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.710 [2024-12-12 10:34:32.643789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.710 [2024-12-12 10:34:32.643797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.710 [2024-12-12 10:34:32.643804] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.710 [2024-12-12 10:34:32.643813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.710 [2024-12-12 10:34:32.643819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.710 [2024-12-12 10:34:32.643828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.710 [2024-12-12 10:34:32.643834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.710 [2024-12-12 10:34:32.643843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.710 [2024-12-12 10:34:32.643851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.710 [2024-12-12 10:34:32.643859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.710 [2024-12-12 10:34:32.643866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.710 [2024-12-12 10:34:32.643874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.710 [2024-12-12 10:34:32.643881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.710 [2024-12-12 10:34:32.643889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.710 [2024-12-12 10:34:32.643896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.710 [2024-12-12 10:34:32.643904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.710 [2024-12-12 10:34:32.643912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.710 [2024-12-12 10:34:32.643920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.710 [2024-12-12 10:34:32.643926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.710 [2024-12-12 10:34:32.643935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.710 [2024-12-12 10:34:32.643941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.710 [2024-12-12 10:34:32.643949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.710 [2024-12-12 10:34:32.643956] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.710 [2024-12-12 10:34:32.643964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.710 [2024-12-12 10:34:32.643970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.710 [2024-12-12 10:34:32.643979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.710 [2024-12-12 10:34:32.643986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.710 [2024-12-12 10:34:32.643994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.710 [2024-12-12 10:34:32.644000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.710 [2024-12-12 10:34:32.644009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.710 [2024-12-12 10:34:32.644015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.710 [2024-12-12 10:34:32.644023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.710 [2024-12-12 10:34:32.644030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.710 [2024-12-12 10:34:32.644041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.710 [2024-12-12 10:34:32.644048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.710 [2024-12-12 10:34:32.644056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.710 [2024-12-12 10:34:32.644063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.710 [2024-12-12 10:34:32.644071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.710 [2024-12-12 10:34:32.644077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.710 [2024-12-12 10:34:32.644085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.710 [2024-12-12 10:34:32.644092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.710 [2024-12-12 10:34:32.644100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.710 [2024-12-12 10:34:32.644107] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.710 [2024-12-12 10:34:32.644115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.710 [2024-12-12 10:34:32.644122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.710 [2024-12-12 10:34:32.644130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.710 [2024-12-12 10:34:32.644136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.711 [2024-12-12 10:34:32.644145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.711 [2024-12-12 10:34:32.644151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.711 [2024-12-12 10:34:32.644160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.711 [2024-12-12 10:34:32.644167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.711 [2024-12-12 10:34:32.644175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.711 [2024-12-12 10:34:32.644182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.711 [2024-12-12 10:34:32.644191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.711 [2024-12-12 10:34:32.644197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.711 [2024-12-12 10:34:32.644206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.711 [2024-12-12 10:34:32.644212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.711 [2024-12-12 10:34:32.644220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.711 [2024-12-12 10:34:32.644228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.711 [2024-12-12 10:34:32.644237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.711 [2024-12-12 10:34:32.644243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.711 [2024-12-12 10:34:32.644251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.711 [2024-12-12 10:34:32.644257] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.711 [2024-12-12 10:34:32.644266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.711 [2024-12-12 10:34:32.644272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.711 [2024-12-12 10:34:32.644280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.711 [2024-12-12 10:34:32.644287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.711 [2024-12-12 10:34:32.644295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.711 [2024-12-12 10:34:32.644302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.711 [2024-12-12 10:34:32.644310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.711 [2024-12-12 10:34:32.644316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.711 [2024-12-12 10:34:32.644325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.711 [2024-12-12 10:34:32.644331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.711 [2024-12-12 10:34:32.644339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.711 [2024-12-12 10:34:32.644346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.711 [2024-12-12 10:34:32.644354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.711 [2024-12-12 10:34:32.644361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.711 [2024-12-12 10:34:32.644369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.711 [2024-12-12 10:34:32.644376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.711 [2024-12-12 10:34:32.644384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.711 [2024-12-12 10:34:32.644391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.711 [2024-12-12 10:34:32.644398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.711 [2024-12-12 10:34:32.644405] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.711 [2024-12-12 10:34:32.644414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.711 [2024-12-12 10:34:32.644421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.711 [2024-12-12 10:34:32.644429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.711 [2024-12-12 10:34:32.644435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.711 [2024-12-12 10:34:32.644443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.711 [2024-12-12 10:34:32.644450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.711 [2024-12-12 10:34:32.644458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.711 [2024-12-12 10:34:32.644465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.711 [2024-12-12 10:34:32.644473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.711 [2024-12-12 10:34:32.644479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.711 [2024-12-12 10:34:32.644488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.711 [2024-12-12 10:34:32.644494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.711 [2024-12-12 10:34:32.644502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.711 [2024-12-12 10:34:32.644509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.711 [2024-12-12 10:34:32.644517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.711 [2024-12-12 10:34:32.644524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.711 [2024-12-12 10:34:32.644533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.711 [2024-12-12 10:34:32.644539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.711 [2024-12-12 10:34:32.644548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.711 [2024-12-12 10:34:32.644554] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.711 [2024-12-12 10:34:32.644562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.711 [2024-12-12 10:34:32.644575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.711 [2024-12-12 10:34:32.644584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.711 [2024-12-12 10:34:32.644590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.711 [2024-12-12 10:34:32.644598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.711 [2024-12-12 10:34:32.644606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.711 [2024-12-12 10:34:32.644615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.711 [2024-12-12 10:34:32.644621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.711 [2024-12-12 10:34:32.644629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.711 [2024-12-12 10:34:32.644636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.711 [2024-12-12 10:34:32.644645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.711 [2024-12-12 10:34:32.644651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.711 [2024-12-12 10:34:32.644660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.711 [2024-12-12 10:34:32.644667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.711 [2024-12-12 10:34:32.644675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.711 [2024-12-12 10:34:32.644682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.711 [2024-12-12 10:34:32.644690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a7c90 is same with the state(6) to be set 00:20:58.712 [2024-12-12 10:34:32.645669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:58.712 [2024-12-12 10:34:32.645688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:20:58.712 [2024-12-12 10:34:32.645700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 
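Each dump above is the same two-line pattern repeated for every outstanding command on a TCP qpair being torn down: a queued READ printed by nvme_io_qpair_print_command, immediately followed by its ABORTED - SQ DELETION (00/08) completion, for cid 0-63 over lba 24576-32640 (len:128 blocks, which at the job's 65536-byte IO size implies 512-byte blocks). When triaging runs like this, a throwaway parser can condense each dump to one line; the sketch below is a hypothetical helper, not part of SPDK or the autotest scripts, and the log file name is an assumption:

```python
import re

# Patterns taken from the dump format above:
#   ... nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 ...
#   ... spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 ...
READ_RE = re.compile(r"READ sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)")
ABORT_RE = re.compile(r"ABORTED - SQ DELETION \(00/08\)")
QPAIR_RE = re.compile(r"recv state of tqpair=(0x[0-9a-f]+)")

def summarize(log_path: str) -> None:
    """Condense each abort dump to one line: qpair, abort count, LBA span."""
    aborted, lbas = 0, []
    with open(log_path, encoding="utf-8", errors="replace") as f:
        for line in f:
            m = READ_RE.search(line)
            if m:
                lbas.append(int(m.group(4)))
            if ABORT_RE.search(line):
                aborted += 1
            q = QPAIR_RE.search(line)
            if q and lbas:  # the recv-state error line closes one dump
                print(f"tqpair={q.group(1)}: {aborted} READs aborted, "
                      f"lba {min(lbas)}..{max(lbas)}")
                aborted, lbas = 0, []

if __name__ == "__main__":
    summarize("autotest.log")  # hypothetical log file name
```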
00:20:58.712 task offset: 29440 on job bdev=Nvme6n1 fails
00:20:58.712
00:20:58.712 Latency(us)
00:20:58.712 [2024-12-12T09:34:32.735Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:58.712 (all jobs: Core Mask 0x1, workload: verify, depth: 64, IO size: 65536; Verification LBA range: start 0x0 length 0x400; every job ended in about 0.90-0.92 seconds with error)
00:20:58.712 Nvme1n1  : 0.92  209.69  13.11  69.90  0.00  226683.86  16852.11  196732.83
00:20:58.712 Nvme2n1  : 0.92  213.61  13.35  69.75  0.00  219817.83  19848.05  203723.34
00:20:58.712 Nvme3n1  : 0.92  208.81  13.05  69.60  0.00  219917.78  14480.34  217704.35
00:20:58.712 Nvme4n1  : 0.92  208.36  13.02  69.45  0.00  216557.59  14417.92  218702.99
00:20:58.712 Nvme5n1  : 0.90  212.21  13.26  70.74  0.00  208462.99  15666.22  230686.72
00:20:58.712 Nvme6n1  : 0.90  212.99  13.31  71.00  0.00  203763.81  11983.73  218702.99
00:20:58.712 Nvme7n1  : 0.90  287.04  17.94  70.93  0.00  158561.76  10048.85  197731.47
00:20:58.712 Nvme8n1  : 0.91  215.03  13.44  70.21  0.00  195478.76  16352.79  214708.42
00:20:58.712 Nvme9n1  : 0.91  211.99  13.25  70.66  0.00  193240.02  11858.90  215707.06
00:20:58.712 Nvme10n1 : 0.91  145.77   9.11  70.14  0.00  248308.09  19723.22  237677.23
00:20:58.712 [2024-12-12T09:34:32.735Z] ===================================================================================================================
00:20:58.712 [2024-12-12T09:34:32.735Z] Total    :       2125.48 132.84 702.38  0.00  206862.39  10048.85  237677.23
00:20:58.712 [2024-12-12 10:34:32.678758] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
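The Total row follows directly from the per-device rows: IOPS, MiB/s, and Fail/s are straight sums (209.69 + 213.61 + ... + 145.77 = 2125.50, matching 2125.48 up to rounding), and min/max latency are the column extremes (10048.85 from Nvme7n1, 237677.23 from Nvme10n1). A minimal cross-check, with the table values transcribed by hand:

```python
# Per-device (IOPS, MiB/s, Fail/s) transcribed from the table above.
rows = {
    "Nvme1n1": (209.69, 13.11, 69.90), "Nvme2n1": (213.61, 13.35, 69.75),
    "Nvme3n1": (208.81, 13.05, 69.60), "Nvme4n1": (208.36, 13.02, 69.45),
    "Nvme5n1": (212.21, 13.26, 70.74), "Nvme6n1": (212.99, 13.31, 71.00),
    "Nvme7n1": (287.04, 17.94, 70.93), "Nvme8n1": (215.03, 13.44, 70.21),
    "Nvme9n1": (211.99, 13.25, 70.66), "Nvme10n1": (145.77, 9.11, 70.14),
}
iops, mibs, fails = (sum(v[i] for v in rows.values()) for i in range(3))
# Agrees with the Total row (2125.48 / 132.84 / 702.38) up to rounding.
print(f"IOPS={iops:.2f} MiB/s={mibs:.2f} Fail/s={fails:.2f}")
```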
00:20:58.712 [2024-12-12 10:34:32.678809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:20:58.712 [2024-12-12 10:34:32.679281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-12-12 10:34:32.679304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a3490 with addr=10.0.0.2, port=4420
00:20:58.712 [2024-12-12 10:34:32.679316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a3490 is same with the state(6) to be set
00:20:58.712 [2024-12-12 10:34:32.679538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-12-12 10:34:32.679550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12974d0 with addr=10.0.0.2, port=4420
00:20:58.712 [2024-12-12 10:34:32.679558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12974d0 is same with the state(6) to be set
00:20:58.712 [2024-12-12 10:34:32.679800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-12-12 10:34:32.679812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12996c0 with addr=10.0.0.2, port=4420
00:20:58.712 [2024-12-12 10:34:32.679820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12996c0 is same with the state(6) to be set
00:20:58.712 [2024-12-12 10:34:32.680041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-12-12 10:34:32.680052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12972d0 with addr=10.0.0.2, port=4420
00:20:58.712 [2024-12-12 10:34:32.680061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12972d0 is same with the state(6) to be set
00:20:58.712 [2024-12-12 10:34:32.680100] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:20:58.712 [2024-12-12 10:34:32.680112] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress.
00:20:58.712 [2024-12-12 10:34:32.680124] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:20:58.712 [2024-12-12 10:34:32.680135] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:20:58.712 [2024-12-12 10:34:32.681127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:20:58.712 [2024-12-12 10:34:32.681144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:20:58.712 [2024-12-12 10:34:32.681153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:20:58.712 [2024-12-12 10:34:32.681162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:20:58.712 [2024-12-12 10:34:32.681219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a3490 (9): Bad file descriptor
00:20:58.712 [2024-12-12 10:34:32.681233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12974d0 (9): Bad file descriptor
00:20:58.712 [2024-12-12 10:34:32.681243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12996c0 (9): Bad file descriptor
00:20:58.712 [2024-12-12 10:34:32.681252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12972d0 (9): Bad file descriptor
00:20:58.712 [2024-12-12 10:34:32.681306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:20:58.712 [2024-12-12 10:34:32.681318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:20:58.712 [2024-12-12 10:34:32.681558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-12-12 10:34:32.681577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b8610 with addr=10.0.0.2, port=4420
00:20:58.712 [2024-12-12 10:34:32.681586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b8610 is same with the state(6) to be set
00:20:58.712 [2024-12-12 10:34:32.681731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-12-12 10:34:32.681743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16ce350 with addr=10.0.0.2, port=4420
00:20:58.712 [2024-12-12 10:34:32.681751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ce350 is same with the state(6) to be set
00:20:58.712 [2024-12-12 10:34:32.681960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-12-12 10:34:32.681970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x171ad10 with addr=10.0.0.2, port=4420
00:20:58.712 [2024-12-12 10:34:32.681977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171ad10 is same with the state(6) to be set
00:20:58.712 [2024-12-12 10:34:32.682128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-12-12 10:34:32.682139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16c81a0 with addr=10.0.0.2, port=4420
00:20:58.712 [2024-12-12 10:34:32.682146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c81a0 is same with the state(6) to be set
00:20:58.712 [2024-12-12 10:34:32.682153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
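Every reconnect attempt above fails the same way: posix_sock_create reports errno = 111, which on Linux is ECONNREFUSED, meaning nothing is accepting TCP connections at 10.0.0.2:4420 while the subsystems are down, and the subsequent "Failed to flush ... (9)" errors are EBADF on the already-closed sockets. A standalone snippet (plain Python, nothing SPDK-specific) confirming what those errno values mean:

```python
import errno
import os
import socket

# errno 111 / errno 9 as reported in the log, resolved by name (Linux values).
print(111, errno.errorcode[111], os.strerror(111))  # 111 ECONNREFUSED Connection refused
print(9, errno.errorcode[9], os.strerror(9))        # 9 EBADF Bad file descriptor

# Reproducing ECONNREFUSED: connect to a local port with no listener.
# Port 1 is assumed to have no listener on the test host.
try:
    socket.create_connection(("127.0.0.1", 1), timeout=1)
except OSError as e:
    print("connect() failed, errno =", e.errno)  # 111 on Linux
```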
nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:58.712 [2024-12-12 10:34:32.682167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:58.712 [2024-12-12 10:34:32.682175] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:20:58.712 [2024-12-12 10:34:32.682183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:20:58.712 [2024-12-12 10:34:32.682188] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:20:58.712 [2024-12-12 10:34:32.682194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:20:58.712 [2024-12-12 10:34:32.682203] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:20:58.713 [2024-12-12 10:34:32.682211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:20:58.713 [2024-12-12 10:34:32.682217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:20:58.713 [2024-12-12 10:34:32.682223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:20:58.713 [2024-12-12 10:34:32.682229] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:20:58.713 [2024-12-12 10:34:32.682235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:20:58.713 [2024-12-12 10:34:32.682241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:20:58.713 [2024-12-12 10:34:32.682247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:20:58.713 [2024-12-12 10:34:32.682253] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:20:58.713 [2024-12-12 10:34:32.682526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.713 [2024-12-12 10:34:32.682537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f9ed0 with addr=10.0.0.2, port=4420 00:20:58.713 [2024-12-12 10:34:32.682544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f9ed0 is same with the state(6) to be set 00:20:58.713 [2024-12-12 10:34:32.682704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.713 [2024-12-12 10:34:32.682714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x171ab30 with addr=10.0.0.2, port=4420 00:20:58.713 [2024-12-12 10:34:32.682721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171ab30 is same with the state(6) to be set 00:20:58.713 [2024-12-12 10:34:32.682729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b8610 (9): Bad file descriptor 00:20:58.713 [2024-12-12 10:34:32.682739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16ce350 (9): Bad file descriptor 00:20:58.713 [2024-12-12 10:34:32.682747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171ad10 (9): Bad file descriptor 00:20:58.713 [2024-12-12 10:34:32.682755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16c81a0 (9): Bad file descriptor 00:20:58.713 [2024-12-12 10:34:32.682784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f9ed0 (9): Bad file descriptor 00:20:58.713 [2024-12-12 10:34:32.682794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171ab30 (9): Bad file descriptor 00:20:58.713 [2024-12-12 10:34:32.682801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:20:58.713 [2024-12-12 10:34:32.682808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:20:58.713 [2024-12-12 10:34:32.682814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:20:58.713 [2024-12-12 10:34:32.682820] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:20:58.713 [2024-12-12 10:34:32.682827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:20:58.713 [2024-12-12 10:34:32.682833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:20:58.713 [2024-12-12 10:34:32.682840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:20:58.713 [2024-12-12 10:34:32.682845] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:20:58.713 [2024-12-12 10:34:32.682855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:20:58.713 [2024-12-12 10:34:32.682860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:20:58.713 [2024-12-12 10:34:32.682867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:20:58.713 [2024-12-12 10:34:32.682873] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:20:58.713 [2024-12-12 10:34:32.682880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:20:58.713 [2024-12-12 10:34:32.682886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:20:58.713 [2024-12-12 10:34:32.682892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:20:58.713 [2024-12-12 10:34:32.682898] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:20:58.713 [2024-12-12 10:34:32.682920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:20:58.713 [2024-12-12 10:34:32.682927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:20:58.713 [2024-12-12 10:34:32.682933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:20:58.713 [2024-12-12 10:34:32.682938] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:20:58.713 [2024-12-12 10:34:32.682946] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:20:58.713 [2024-12-12 10:34:32.682953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:20:58.713 [2024-12-12 10:34:32.682959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:20:58.713 [2024-12-12 10:34:32.682965] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:20:59.280 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:21:00.217 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1575133 00:21:00.217 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:21:00.217 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1575133 00:21:00.217 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:21:00.217 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:00.217 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:21:00.217 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:00.217 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 1575133 00:21:00.217 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:21:00.217 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:00.217 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:21:00.217 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:21:00.217 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:21:00.217 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:00.217 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:21:00.217 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:00.217 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:00.217 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:00.217 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:00.217 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:00.217 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:21:00.217 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:00.217 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:21:00.217 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:00.217 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:00.217 rmmod nvme_tcp 00:21:00.217 
rmmod nvme_fabrics 00:21:00.217 rmmod nvme_keyring 00:21:00.217 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:00.217 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:21:00.217 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:21:00.217 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 1574975 ']' 00:21:00.217 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 1574975 00:21:00.217 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1574975 ']' 00:21:00.218 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1574975 00:21:00.218 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1574975) - No such process 00:21:00.218 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1574975 is not found' 00:21:00.218 Process with pid 1574975 is not found 00:21:00.218 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:00.218 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:00.218 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:00.218 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:21:00.218 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:21:00.218 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:00.218 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:21:00.218 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:00.218 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:00.218 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.218 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:00.218 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:02.754 00:21:02.754 real 0m7.148s 00:21:02.754 user 0m16.378s 00:21:02.754 sys 0m1.329s 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:02.754 ************************************ 00:21:02.754 END TEST nvmf_shutdown_tc3 00:21:02.754 ************************************ 00:21:02.754 10:34:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:02.754 ************************************ 00:21:02.754 START TEST nvmf_shutdown_tc4 00:21:02.754 ************************************ 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:02.754 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:02.754 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:02.754 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.755 10:34:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:02.755 Found net devices under 0000:af:00.0: cvl_0_0 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:02.755 Found net devices under 0000:af:00.1: cvl_0_1 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:02.755 10:34:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:02.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:02.755 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:21:02.755 00:21:02.755 --- 10.0.0.2 ping statistics --- 00:21:02.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.755 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:02.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:02.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:21:02.755 00:21:02.755 --- 10.0.0.1 ping statistics --- 00:21:02.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.755 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=1576371 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 1576371 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 1576371 ']' 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:02.755 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:02.755 [2024-12-12 10:34:36.628867] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:21:02.755 [2024-12-12 10:34:36.628908] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:02.755 [2024-12-12 10:34:36.706946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:02.755 [2024-12-12 10:34:36.747001] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:02.755 [2024-12-12 10:34:36.747037] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:02.755 [2024-12-12 10:34:36.747044] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:02.755 [2024-12-12 10:34:36.747050] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:02.755 [2024-12-12 10:34:36.747055] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:02.755 [2024-12-12 10:34:36.748507] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:21:02.755 [2024-12-12 10:34:36.748625] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:21:02.755 [2024-12-12 10:34:36.748683] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:02.755 [2024-12-12 10:34:36.748684] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:21:03.014 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:03.014 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:21:03.014 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:03.014 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:03.014 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:03.014 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:03.014 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:03.014 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.014 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:03.014 [2024-12-12 10:34:36.892989] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:03.014 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.014 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:03.014 10:34:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:03.014 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:03.014 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:03.014 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:03.014 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:03.014 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:03.014 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:03.014 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:03.014 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:03.014 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:03.014 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:03.014 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:03.014 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:03.014 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:03.014 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:03.014 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:03.014 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:03.014 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:03.014 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:03.014 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:03.014 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:03.014 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:03.014 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:03.014 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:03.014 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:03.014 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.014 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:03.014 Malloc1 
00:21:03.014 [2024-12-12 10:34:36.998960] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:03.014 Malloc2 00:21:03.272 Malloc3 00:21:03.272 Malloc4 00:21:03.272 Malloc5 00:21:03.272 Malloc6 00:21:03.272 Malloc7 00:21:03.272 Malloc8 00:21:03.530 Malloc9 00:21:03.530 Malloc10 00:21:03.530 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.530 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:03.530 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:03.530 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:03.530 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1576428 00:21:03.530 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:21:03.530 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:21:03.530 [2024-12-12 10:34:37.502063] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:08.804 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:08.804 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1576371 00:21:08.804 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1576371 ']' 00:21:08.804 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1576371 00:21:08.804 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:21:08.804 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:08.804 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1576371 00:21:08.804 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:08.804 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:08.804 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1576371' 00:21:08.804 killing process with pid 1576371 00:21:08.804 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 1576371 00:21:08.804 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 1576371 00:21:08.804 [2024-12-12 10:34:42.495397] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b54200 is same with the state(6) to be set 00:21:08.804 [2024-12-12 10:34:42.495455] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b54200 is same with the state(6) to be set 00:21:08.804 [2024-12-12 10:34:42.495463] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b54200 is same with the state(6) to be set 00:21:08.804 [2024-12-12 10:34:42.495476] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b54200 is same with the state(6) to be set 00:21:08.804 [2024-12-12 10:34:42.495482] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b54200 is same with the state(6) to be set 00:21:08.804 [2024-12-12 10:34:42.495488] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b54200 is same with the state(6) to be set 00:21:08.804 [2024-12-12 10:34:42.495494] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b54200 is same with the state(6) to be set 00:21:08.804 [2024-12-12 10:34:42.495500] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b54200 is same with the state(6) to be set 00:21:08.804 [2024-12-12 10:34:42.495979] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b546f0 is same with the state(6) to be set 00:21:08.804 [2024-12-12 10:34:42.496011] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b546f0 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.496018] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b546f0 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.496024] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b546f0 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.496030] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b546f0 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.496037] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b546f0 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.496881] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b54bc0 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.496906] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b54bc0 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.496914] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b54bc0 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.496920] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b54bc0 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.496927] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b54bc0 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.496934] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b54bc0 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.497483] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b53d30 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.497508] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b53d30 is same with the 
state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.497516] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b53d30 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.497522] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b53d30 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.497529] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b53d30 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.497536] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b53d30 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.497542] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b53d30 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.497548] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b53d30 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.499277] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b568a0 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.499300] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b568a0 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.499313] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b568a0 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.499320] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b568a0 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.499326] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b568a0 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.499333] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b568a0 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.499339] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b568a0 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.499345] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b568a0 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.499955] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56d70 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.499978] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56d70 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.500392] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d422a0 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.500414] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d422a0 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.500422] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d422a0 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.500429] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d422a0 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.500435] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d422a0 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.500442] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1d422a0 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.501389] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b563d0 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.501413] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b563d0 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.501420] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b563d0 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.501428] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b563d0 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.501435] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b563d0 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.501441] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b563d0 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.501447] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b563d0 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.501454] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b563d0 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.501460] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b563d0 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.501466] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b563d0 is same with the state(6) to be set 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 starting I/O failed: -6 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 starting I/O failed: -6 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 starting I/O failed: -6 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 starting I/O failed: -6 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 starting I/O failed: -6 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 starting I/O failed: -6 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 starting I/O failed: -6 00:21:08.805 Write completed with error (sct=0, sc=8) 
00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 starting I/O failed: -6 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 starting I/O failed: -6 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 starting I/O failed: -6 00:21:08.805 [2024-12-12 10:34:42.506541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:08.805 [2024-12-12 10:34:42.506595] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d452c0 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.506617] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d452c0 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.506625] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d452c0 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.506631] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d452c0 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.506637] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d452c0 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.506644] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d452c0 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.506650] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d452c0 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.506656] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d452c0 is same with the state(6) to be set 00:21:08.805 [2024-12-12 10:34:42.506662] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d452c0 is same with the state(6) to be set 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 starting I/O failed: -6 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 starting I/O failed: -6 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 starting I/O failed: -6 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 starting I/O failed: -6 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 starting I/O failed: -6 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 starting I/O failed: -6 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 starting I/O failed: -6 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 starting I/O failed: -6 
00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.805 starting I/O failed: -6 00:21:08.805 Write completed with error (sct=0, sc=8) 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 [2024-12-12 10:34:42.507124] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d45790 is same with the state(6) to be set 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 [2024-12-12 10:34:42.507144] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d45790 is same with the state(6) to be set 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 [2024-12-12 10:34:42.507153] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d45790 is same with the state(6) to be set 00:21:08.806 starting I/O failed: -6 00:21:08.806 [2024-12-12 10:34:42.507160] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d45790 is same with the state(6) to be set 00:21:08.806 [2024-12-12 10:34:42.507167] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d45790 is same with the state(6) to be set 00:21:08.806 [2024-12-12 10:34:42.507173] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d45790 is same with the state(6) to be set 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 [2024-12-12 10:34:42.507180] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d45790 is same with the state(6) to be set 00:21:08.806 [2024-12-12 10:34:42.507187] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d45790 is same with the state(6) to be set 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 [2024-12-12 10:34:42.507484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error
-6 (No such device or address) on qpair id 3 00:21:08.806 [2024-12-12 10:34:42.507507] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d45c60 is same with the state(6) to be set 00:21:08.806 [2024-12-12 10:34:42.507532] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d45c60 is same with the state(6) to be set 00:21:08.806 [2024-12-12 10:34:42.507539] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d45c60 is same with the state(6) to be set 00:21:08.806 [2024-12-12 10:34:42.507545] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d45c60 is same with the state(6) to be set 00:21:08.806 [2024-12-12 10:34:42.507552] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d45c60 is same with the state(6) to be set 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 [2024-12-12 10:34:42.508004] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d44df0 is same with the state(6) to be set 00:21:08.806 starting I/O failed: -6 00:21:08.806 [2024-12-12 10:34:42.508027] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d44df0 is same with the state(6) to be set 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 [2024-12-12 10:34:42.508035] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d44df0 is same with the state(6) to be set 00:21:08.806 [2024-12-12 10:34:42.508042] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d44df0 is same with the state(6) to be set 00:21:08.806 
Write completed with error (sct=0, sc=8) 00:21:08.806 [2024-12-12 10:34:42.508048] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d44df0 is same with the state(6) to be set 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 [2024-12-12 10:34:42.508473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write 
completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.806 starting I/O failed: -6 00:21:08.806 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write 
completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 [2024-12-12 10:34:42.510017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:08.807 NVMe io qpair process completion error 00:21:08.807 [2024-12-12 10:34:42.510143] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc8090 is same with the state(6) to be set 00:21:08.807 [2024-12-12 10:34:42.510160] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc8090 is same with the state(6) to be set 00:21:08.807 [2024-12-12 10:34:42.510167] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc8090 is same with the state(6) to be set 00:21:08.807 [2024-12-12 10:34:42.510174] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc8090 is same with the state(6) to be set 00:21:08.807 [2024-12-12 10:34:42.510181] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc8090 is same with the state(6) to be set 00:21:08.807 [2024-12-12 10:34:42.510187] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc8090 is same with the state(6) to be set 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 [2024-12-12 10:34:42.510482] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc8560 is same with the state(6) to be set 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 [2024-12-12 10:34:42.510500] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc8560 is same with the state(6) to be set 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 [2024-12-12 10:34:42.510508] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc8560 is same with the state(6) to be set 00:21:08.807 [2024-12-12 10:34:42.510515] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc8560 is same with the state(6) to be set 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 [2024-12-12 10:34:42.510522] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc8560 is same with the state(6) to be set 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed
with error (sct=0, sc=8) 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 [2024-12-12 10:34:42.510823] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50040 is same with the state(6) to be set 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 [2024-12-12 10:34:42.510839] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50040 is same with the state(6) to be set 00:21:08.807 [2024-12-12 10:34:42.510847] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50040 is same with the state(6) to be set 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 [2024-12-12 10:34:42.510855] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50040 is same with the state(6) to be set 00:21:08.807 [2024-12-12 10:34:42.510861] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50040 is same with the state(6) to be set 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 [2024-12-12 10:34:42.510868] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50040 is same with the state(6) to be set 00:21:08.807 [2024-12-12 10:34:42.510875] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50040 is same with the state(6) to be set 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 [2024-12-12 10:34:42.510881] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50040 is same with the state(6) to be set 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 [2024-12-12 10:34:42.510984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed with error (sct=0, sc=8) 
00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 [2024-12-12 10:34:42.511249] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc7bc0 is same with the state(6) to be set 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 [2024-12-12 10:34:42.511267] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc7bc0 is same with the state(6) to be set 00:21:08.807 [2024-12-12 10:34:42.511275] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc7bc0 is same with the state(6) to be set 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 [2024-12-12 10:34:42.511282] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc7bc0 is same with the state(6) to be set 00:21:08.807 [2024-12-12 10:34:42.511288] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc7bc0 is same with the state(6) to be set 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 [2024-12-12 10:34:42.511294] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc7bc0 is same with the state(6) to be set 00:21:08.807 starting I/O failed: -6 00:21:08.807 [2024-12-12 10:34:42.511301] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc7bc0 is same with the state(6) to be set 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 starting I/O failed: -6 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.807 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O 
failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 [2024-12-12 10:34:42.511885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 Write completed with error (sct=0, sc=8) 
00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 [2024-12-12 10:34:42.512854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 
00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.808 starting I/O failed: -6 00:21:08.808 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 [2024-12-12 10:34:42.515016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:08.809 NVMe io qpair process completion error 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 
00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 [2024-12-12 10:34:42.515643] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b529b0 is same with the state(6) to be set 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 [2024-12-12 10:34:42.515665] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b529b0 is same with the state(6) to be set 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 [2024-12-12 10:34:42.515674] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b529b0 is same with the state(6) to be set 00:21:08.809 [2024-12-12 10:34:42.515681] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b529b0 is same with the state(6) to be set 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 [2024-12-12 10:34:42.515688] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b529b0 is same with the state(6) to be set 00:21:08.809 starting I/O failed: -6 00:21:08.809 [2024-12-12 10:34:42.515694] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b529b0 is same with the state(6) to be set 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 [2024-12-12 10:34:42.516034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 
00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 [2024-12-12 10:34:42.516905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with 
error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.809 starting I/O failed: -6 00:21:08.809 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O 
failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 [2024-12-12 10:34:42.517911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:08.810 starting I/O failed: -6 00:21:08.810 starting I/O failed: -6 00:21:08.810 starting I/O failed: -6 00:21:08.810 starting I/O failed: -6 00:21:08.810 starting I/O failed: -6 00:21:08.810 starting I/O failed: -6 00:21:08.810 starting I/O failed: -6 00:21:08.810 starting I/O failed: -6 00:21:08.810 starting I/O failed: -6 00:21:08.810 starting I/O failed: -6 00:21:08.810 NVMe io qpair process completion error 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 [2024-12-12 10:34:42.519407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 
00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 [2024-12-12 10:34:42.520280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: -6 00:21:08.810 Write completed with error (sct=0, sc=8) 00:21:08.810 starting I/O failed: 
-6
00:21:08.810 Write completed with error (sct=0, sc=8)
00:21:08.810 starting I/O failed: -6
[... identical "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries repeated, timestamps 00:21:08.810-00:21:08.811; omitted for brevity ...]
00:21:08.811 [2024-12-12 10:34:42.521333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error entries omitted ...]
00:21:08.811 [2024-12-12 10:34:42.522731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:08.811 NVMe io qpair process completion error
[... repeated write-error entries omitted ...]
00:21:08.811 [2024-12-12 10:34:42.523764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error entries omitted ...]
00:21:08.812 [2024-12-12 10:34:42.524643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error entries omitted ...]
00:21:08.812 [2024-12-12 10:34:42.525679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error entries omitted ...]
00:21:08.813 [2024-12-12 10:34:42.527730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:08.813 NVMe io qpair process completion error
[... repeated write-error entries omitted ...]
00:21:08.813 [2024-12-12 10:34:42.529882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error entries omitted ...]
00:21:08.814 [2024-12-12 10:34:42.530893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error entries omitted ...]
00:21:08.814 [2024-12-12 10:34:42.535391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:08.814 NVMe io qpair process completion error
[... repeated write-error entries omitted ...]
00:21:08.814 [2024-12-12 10:34:42.536372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error entries omitted ...]
00:21:08.815 [2024-12-12 10:34:42.537270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error entries omitted ...]
00:21:08.815 [2024-12-12 10:34:42.538285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error entries omitted ...]
00:21:08.816 [2024-12-12 10:34:42.540203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:08.816 NVMe io qpair process completion error
[... repeated write-error entries omitted ...]
00:21:08.816 [2024-12-12 10:34:42.541268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error entries omitted ...]
00:21:08.816 [2024-12-12 10:34:42.542123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error entries omitted ...]
00:21:08.817 [2024-12-12 10:34:42.543162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error entries omitted ...]
00:21:08.817 [2024-12-12 10:34:42.544953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:08.817 NVMe io qpair process completion error
[... repeated write-error entries omitted ...]
Write completed with error (sct=0, sc=8) 00:21:08.817 Write completed with error (sct=0, sc=8) 00:21:08.817 Write completed with error (sct=0, sc=8) 00:21:08.817 Write completed with error (sct=0, sc=8) 00:21:08.817 starting I/O failed: -6 00:21:08.817 Write completed with error (sct=0, sc=8) 00:21:08.817 Write completed with error (sct=0, sc=8) 00:21:08.817 Write completed with error (sct=0, sc=8) 00:21:08.817 Write completed with error (sct=0, sc=8) 00:21:08.817 starting I/O failed: -6 00:21:08.817 Write completed with error (sct=0, sc=8) 00:21:08.817 Write completed with error (sct=0, sc=8) 00:21:08.817 Write completed with error (sct=0, sc=8) 00:21:08.817 Write completed with error (sct=0, sc=8) 00:21:08.817 starting I/O failed: -6 00:21:08.817 Write completed with error (sct=0, sc=8) 00:21:08.817 Write completed with error (sct=0, sc=8) 00:21:08.817 Write completed with error (sct=0, sc=8) 00:21:08.817 [2024-12-12 10:34:42.546355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:08.817 starting I/O failed: -6 00:21:08.817 Write completed with error (sct=0, sc=8) 00:21:08.817 Write completed with error (sct=0, sc=8) 00:21:08.817 Write completed with error (sct=0, sc=8) 00:21:08.817 starting I/O failed: -6 00:21:08.817 Write completed with error (sct=0, sc=8) 00:21:08.817 starting I/O failed: -6 00:21:08.817 Write completed with error (sct=0, sc=8) 00:21:08.817 Write completed with error (sct=0, sc=8) 00:21:08.817 Write completed with error (sct=0, sc=8) 00:21:08.817 starting I/O failed: -6 00:21:08.817 Write completed with error (sct=0, sc=8) 00:21:08.817 starting I/O failed: -6 00:21:08.817 Write completed with error (sct=0, sc=8) 00:21:08.817 Write completed with error (sct=0, sc=8) 00:21:08.817 Write completed with error (sct=0, sc=8) 00:21:08.817 starting I/O failed: -6 00:21:08.817 Write completed with error (sct=0, sc=8) 00:21:08.817 starting I/O failed: -6 00:21:08.817 Write completed with error (sct=0, sc=8) 00:21:08.817 Write completed with error (sct=0, sc=8) 00:21:08.817 Write completed with error (sct=0, sc=8) 00:21:08.817 starting I/O failed: -6 00:21:08.817 Write completed with error (sct=0, sc=8) 00:21:08.817 starting I/O failed: -6 00:21:08.817 Write completed with error (sct=0, sc=8) 00:21:08.817 Write completed with error (sct=0, sc=8) 00:21:08.817 Write completed with error (sct=0, sc=8) 00:21:08.817 starting I/O failed: -6 00:21:08.817 Write completed with error (sct=0, sc=8) 00:21:08.817 starting I/O failed: -6 00:21:08.817 Write completed with error (sct=0, sc=8) 00:21:08.817 Write completed with error (sct=0, sc=8) 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 Write completed with 
error (sct=0, sc=8) 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 [2024-12-12 10:34:42.547294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O 
failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 [2024-12-12 10:34:42.548290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 
starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.818 Write completed with error (sct=0, sc=8) 00:21:08.818 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 
starting I/O failed: -6 00:21:08.819 [2024-12-12 10:34:42.552354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:08.819 NVMe io qpair process completion error 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 [2024-12-12 10:34:42.553304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write 
completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 [2024-12-12 10:34:42.554215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 
Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.819 starting I/O failed: -6 00:21:08.819 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 [2024-12-12 10:34:42.555226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 
00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 
00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 starting I/O failed: -6 00:21:08.820 [2024-12-12 10:34:42.559241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:08.820 NVMe io qpair process completion error 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 Write completed with error (sct=0, sc=8) 00:21:08.820 Write completed with error (sct=0, sc=8) 
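The burst of identical records above is the intended outcome of shutdown_tc4: the target is torn down while spdk_nvme_perf still has writes queued, so spdk_nvme_qpair_process_completions() reports CQ transport error -6 (-ENXIO, "No such device or address", as the log itself decodes it) and every in-flight write completes with an error. To check that the failures cover all of the disconnected subsystems, the errors can be tallied per cnode; a minimal triage sketch, assuming this console output was saved to a hypothetical build.log:

    # count CQ transport errors per subsystem; build.log is a hypothetical saved copy of this console
    grep 'CQ transport error -6' build.log | grep -o 'cnode[0-9]*' | sort | uniq -c | sort -rn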
[... remaining "Write completed with error (sct=0, sc=8)" drain records omitted ...]
00:21:08.820 Initializing NVMe Controllers
00:21:08.820 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:21:08.820 Controller IO queue size 128, less than required.
00:21:08.820 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:08.820 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:21:08.820 Controller IO queue size 128, less than required.
00:21:08.820 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:08.820 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:21:08.820 Controller IO queue size 128, less than required.
00:21:08.820 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:08.820 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:21:08.820 Controller IO queue size 128, less than required.
00:21:08.820 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:08.820 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:08.820 Controller IO queue size 128, less than required.
00:21:08.820 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:08.820 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:21:08.820 Controller IO queue size 128, less than required.
00:21:08.820 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:08.821 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:21:08.821 Controller IO queue size 128, less than required.
00:21:08.821 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:08.821 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:21:08.821 Controller IO queue size 128, less than required.
00:21:08.821 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:08.821 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:21:08.821 Controller IO queue size 128, less than required.
00:21:08.821 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:08.821 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:21:08.821 Controller IO queue size 128, less than required.
00:21:08.821 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:08.821 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:21:08.821 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:21:08.821 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:21:08.821 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:21:08.821 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:08.821 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:21:08.821 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:21:08.821 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:21:08.821 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:21:08.821 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:21:08.821 Initialization complete. Launching workers.
00:21:08.821 ========================================================
00:21:08.821 Latency(us)
00:21:08.821 Device Information                                     :       IOPS      MiB/s    Average        min        max
00:21:08.821 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:    2227.97      95.73   57455.94     858.39  115802.94
00:21:08.821 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:    2203.16      94.67   57483.45     718.20  122854.40
00:21:08.821 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:    2179.63      93.66   58113.53     848.31  121803.42
00:21:08.821 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:    2185.99      93.93   58156.73     912.48  120814.87
00:21:08.821 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2218.00      95.30   57122.05     875.75   98131.91
00:21:08.821 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:    2189.81      94.09   57866.15     727.94   97700.98
00:21:08.821 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:   2178.36      93.60   58189.94     914.25  101179.62
00:21:08.821 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:    2198.50      94.47   57703.45     694.40  105719.40
00:21:08.821 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:    2219.91      95.39   57161.93     903.91  108346.10
00:21:08.821 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:    2191.08      94.15   57932.12     839.16  110770.96
00:21:08.821 ========================================================
00:21:08.821 Total                                                  :   21992.41     944.99   57715.91     694.40  122854.40
00:21:08.821
00:21:08.821 [2024-12-12 10:34:42.564869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1515bc0 is same with the state(6) to be set
00:21:08.821 [2024-12-12 10:34:42.564919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1517ae0 is same with the state(6) to be set
00:21:08.821 [2024-12-12 10:34:42.564948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1515890 is same with the state(6) to be set
00:21:08.821 [2024-12-12 10:34:42.564976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1516740 is same with the state(6) to be set
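Two points worth noting in the summary above. First, the per-controller warning means spdk_nvme_perf requested a deeper submission queue than the controller's reported IO queue size of 128, so surplus requests queue in the host driver; the remedy is exactly what the message suggests, a lower queue depth (-q) or a smaller IO size (-o). Second, the table is self-consistent: 944.99 MiB/s across 21992.41 IOPS works out to roughly 45,056 bytes (44 KiB) per IO. A hedged re-run sketch follows; the flags are real spdk_nvme_perf options, but the values actually used by shutdown_tc4 are not visible in this excerpt, so treat them as illustrative:

    # illustrative spdk_nvme_perf invocation with a reduced queue depth (-q) and IO size (-o)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode5' \
        -q 64 -o 4096 -w write -t 10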
00:21:08.821 [2024-12-12 10:34:42.565005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1517720 is same with the state(6) to be set
00:21:08.821 [2024-12-12 10:34:42.565033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1516a70 is same with the state(6) to be set
00:21:08.821 [2024-12-12 10:34:42.565060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1517900 is same with the state(6) to be set
00:21:08.821 [2024-12-12 10:34:42.565088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1515ef0 is same with the state(6) to be set
00:21:08.821 [2024-12-12 10:34:42.565115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1516410 is same with the state(6) to be set
00:21:08.821 [2024-12-12 10:34:42.565145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1515560 is same with the state(6) to be set
00:21:08.821 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:21:09.080 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:21:10.016 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1576428
00:21:10.016 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:21:10.016 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1576428
00:21:10.016 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:21:10.016 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:10.016 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:21:10.016 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:10.016 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 1576428
00:21:10.016 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:21:10.016 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:21:10.016 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:21:10.016 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:21:10.016 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:21:10.016 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:21:10.016 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:21:10.016 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
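The NOT wait 1576428 sequence above is the harness asserting that the perf process has already exited with a failure: wait returns the reaped process's non-zero status (es=1 here), and NOT succeeds precisely because the wrapped command did not. A minimal sketch of that invert-the-exit-status idiom, assuming a simplified helper (the real one in autotest_common.sh also validates the argument via type -t and special-cases exit codes above 128):

    # succeed only if the wrapped command fails
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }

    NOT wait 1576428 && echo 'perf exited non-zero, as the test expects'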
00:21:10.016 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:21:10.016 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:21:10.016 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:21:10.016 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:10.016 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:21:10.016 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:10.016 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:21:10.016 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:10.016 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:21:10.016 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:21:10.016 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 1576371 ']'
00:21:10.016 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 1576371
00:21:10.016 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1576371 ']'
00:21:10.016 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1576371
00:21:10.016 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1576371) - No such process
00:21:10.016 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1576371 is not found'
Process with pid 1576371 is not found
00:21:10.017 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:21:10.017 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:21:10.017 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:21:10.017 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:21:10.017 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save
00:21:10.017 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:21:10.017 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore
00:21:10.017 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:21:10.017 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:21:10.017 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
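Two cleanup idioms show up in the trace above: killprocess probes with kill -0, which delivers no signal and only reports through its exit status whether the pid still exists (hence the harmless "No such process" message here), and iptr repairs the firewall by piping iptables-save through grep -v SPDK_NVMF into iptables-restore, dropping every saved rule that carries the SPDK_NVMF marker. A minimal sketch of the kill -0 pattern (simplified; the real helper in autotest_common.sh does more bookkeeping):

    # kill a pid only if it still exists; kill -0 delivers no signal, it only probes
    killprocess() {
        local pid=$1
        if kill -0 "$pid" 2>/dev/null; then
            kill "$pid"
        else
            echo "Process with pid $pid is not found"
        fi
    }

    killprocess 1576371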
00:21:10.017 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:10.017 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:21:12.551
00:21:12.551 real 0m9.798s
00:21:12.551 user 0m24.901s
00:21:12.551 sys 0m5.228s
00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:21:12.551 ************************************
00:21:12.551 END TEST nvmf_shutdown_tc4
00:21:12.551 ************************************
00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:21:12.551
00:21:12.551 real 0m40.274s
00:21:12.551 user 1m37.923s
00:21:12.551 sys 0m13.866s
00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:21:12.551 ************************************
00:21:12.551 END TEST nvmf_shutdown
00:21:12.551 ************************************
00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:21:12.551 ************************************
00:21:12.551 START TEST nvmf_nsid
00:21:12.551 ************************************
00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:21:12.551 * Looking for test storage...
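The banner blocks and the real/user/sys triples above come from the harness's run_test wrapper, which brackets each named sub-test with banners and times its body; the '[' 3 -le 1 ']' check appears to be its guard against being invoked without a test body. A minimal sketch of the pattern, assuming a simplified wrapper (the real one in autotest_common.sh does more bookkeeping):

    # banner-and-time a named test, preserving its exit status
    run_test() {
        local name=$1 rc
        shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"
        rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return "$rc"
    }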
00:21:12.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version
00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l
00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l
00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-:
00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1
00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-:
00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2
00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<'
00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2
00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1
00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in
00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1
00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 ))
00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
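The lt 1.15 2 call above, continuing below through the decimal and ver1[v]/ver2[v] steps, is scripts/common.sh deciding whether the installed lcov (1.15) predates 2.x: both version strings are split on the IFS set .-: and compared numerically field by field, with missing fields counting as 0. A minimal sketch of that idiom (the real cmp_versions supports the full operator set through its op argument):

    # true if version $1 is strictly older than version $2 (numeric fields only)
    version_lt() {
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo 'lcov 1.15: use the pre-2.0 option set'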
ver1_l : ver2_l) )) 00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:12.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.551 --rc genhtml_branch_coverage=1 00:21:12.551 --rc genhtml_function_coverage=1 00:21:12.551 --rc genhtml_legend=1 00:21:12.551 --rc geninfo_all_blocks=1 00:21:12.551 --rc geninfo_unexecuted_blocks=1 00:21:12.551 00:21:12.551 ' 00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:12.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.551 --rc genhtml_branch_coverage=1 00:21:12.551 --rc genhtml_function_coverage=1 00:21:12.551 --rc genhtml_legend=1 00:21:12.551 --rc geninfo_all_blocks=1 00:21:12.551 --rc geninfo_unexecuted_blocks=1 00:21:12.551 00:21:12.551 ' 00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:12.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.551 --rc genhtml_branch_coverage=1 00:21:12.551 --rc genhtml_function_coverage=1 00:21:12.551 --rc genhtml_legend=1 00:21:12.551 --rc geninfo_all_blocks=1 00:21:12.551 --rc geninfo_unexecuted_blocks=1 00:21:12.551 00:21:12.551 ' 00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:12.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.551 --rc genhtml_branch_coverage=1 00:21:12.551 --rc genhtml_function_coverage=1 00:21:12.551 --rc genhtml_legend=1 00:21:12.551 --rc geninfo_all_blocks=1 00:21:12.551 --rc geninfo_unexecuted_blocks=1 00:21:12.551 00:21:12.551 ' 00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:12.551 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:12.552 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:12.552 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:12.552 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:21:12.552 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:12.552 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:12.552 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:12.552 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.552 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.552 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.552 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:21:12.552 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.552 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:21:12.552 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:12.552 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:12.552 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:12.552 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:12.552 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:12.552 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:12.552 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:12.552 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:12.552 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:12.552 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:12.552 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:21:12.552 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:21:12.552 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:21:12.552 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:21:12.552 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:21:12.552 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:21:12.552 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:12.552 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:12.552 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:12.552 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:12.552 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:12.552 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:12.552 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:12.552 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:12.552 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:12.552 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:12.552 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:21:12.552 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:19.121 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:19.121 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
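[editor's note] The trace above classifies NICs by PCI vendor:device pairs (e810, x722, mlx arrays) and then echoes the matches it finds. A minimal standalone sketch of that discovery, assuming only standard sysfs layout — it walks PCI devices, matches the Intel E810 ID pair seen in this log (0x8086:0x159b), and lists the kernel net devices bound to each match. Variable names here are illustrative, not the nvmf/common.sh implementation:

    # Walk PCI devices and report net interfaces for matching E810 ports.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(cat "$pci/vendor")   # e.g. 0x8086 for Intel
        device=$(cat "$pci/device")   # e.g. 0x159b for an E810 port
        if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
            # the net/ subdirectory names the interfaces for this PCI function
            echo "Found $pci:" "$(ls "$pci/net" 2>/dev/null)"
        fi
    done

Run against the machine in this log, a walk like this would surface the two cvl_0_0/cvl_0_1 interfaces reported under 0000:af:00.0 and 0000:af:00.1.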
00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:19.121 Found net devices under 0000:af:00.0: cvl_0_0 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:19.121 Found net devices under 0000:af:00.1: cvl_0_1 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:19.121 10:34:51 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:19.121 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:19.121 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:19.121 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:19.121 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:19.121 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:19.121 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:19.121 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:19.121 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:19.121 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:19.121 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:19.121 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:21:19.121 00:21:19.121 --- 10.0.0.2 ping statistics --- 00:21:19.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.121 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:21:19.121 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:19.121 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:19.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:21:19.121 00:21:19.122 --- 10.0.0.1 ping statistics --- 00:21:19.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.122 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=1581012 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 1581012 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1581012 ']' 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:19.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:19.122 [2024-12-12 10:34:52.296340] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:21:19.122 [2024-12-12 10:34:52.296389] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:19.122 [2024-12-12 10:34:52.375340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.122 [2024-12-12 10:34:52.415774] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:19.122 [2024-12-12 10:34:52.415811] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:19.122 [2024-12-12 10:34:52.415819] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:19.122 [2024-12-12 10:34:52.415826] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:19.122 [2024-12-12 10:34:52.415830] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:19.122 [2024-12-12 10:34:52.416308] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1581037 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=acce8bfc-54a5-4bd2-80e6-ac843f33df60 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=5e2a38ff-839d-40b2-b2f0-d99275c8848a 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=ffcff24b-f711-4568-81d2-d8b2e281d9c0 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:19.122 null0 00:21:19.122 null1 00:21:19.122 [2024-12-12 10:34:52.593757] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:21:19.122 [2024-12-12 10:34:52.593800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1581037 ] 00:21:19.122 null2 00:21:19.122 [2024-12-12 10:34:52.603126] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:19.122 [2024-12-12 10:34:52.627314] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1581037 /var/tmp/tgt2.sock 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1581037 ']' 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:21:19.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
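[editor's note] The NGUID checks further down in this test compare `nvme id-ns ... -o json | jq -r .nguid` output against the uuidgen values generated above with dashes stripped and hex uppercased (the trace shows `tr -d -` inside a uuid2nguid helper). A minimal sketch consistent with that trace — the helper name comes from the log, the body is an illustration rather than the nvmf/common.sh source:

    # Convert an RFC 4122 UUID string to the NGUID form used for comparison.
    uuid2nguid() {
        local uuid=$1
        echo "$uuid" | tr -d '-' | tr '[:lower:]' '[:upper:]'
    }
    uuid2nguid acce8bfc-54a5-4bd2-80e6-ac843f33df60
    # -> ACCE8BFC54A54BD280E6AC843F33DF60, matching the value echoed later in this log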
00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:19.122 [2024-12-12 10:34:52.669360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.122 [2024-12-12 10:34:52.712297] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:19.122 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:21:19.380 [2024-12-12 10:34:53.230228] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:19.380 [2024-12-12 10:34:53.246319] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:21:19.380 nvme0n1 nvme0n2 00:21:19.380 nvme1n1 00:21:19.380 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:21:19.381 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:21:19.381 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 00:21:20.755 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:21:20.755 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:21:20.755 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:21:20.755 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:21:20.755 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:21:20.755 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:21:20.755 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:21:20.755 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:20.755 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:20.755 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:20.755 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:21:20.755 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:21:20.755 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:21:21.698 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:21.698 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:21.698 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:21.698 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:21.698 10:34:55 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:21.698 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid acce8bfc-54a5-4bd2-80e6-ac843f33df60 00:21:21.698 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:21.698 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:21:21.698 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:21:21.698 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:21:21.698 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:21.698 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=acce8bfc54a54bd280e6ac843f33df60 00:21:21.698 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo ACCE8BFC54A54BD280E6AC843F33DF60 00:21:21.698 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ ACCE8BFC54A54BD280E6AC843F33DF60 == \A\C\C\E\8\B\F\C\5\4\A\5\4\B\D\2\8\0\E\6\A\C\8\4\3\F\3\3\D\F\6\0 ]] 00:21:21.698 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:21:21.698 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:21.698 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:21.698 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:21:21.698 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:21.698 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:21:21.698 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:21.698 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 5e2a38ff-839d-40b2-b2f0-d99275c8848a 00:21:21.698 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:21.698 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:21:21.698 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:21:21.698 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:21:21.698 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:21.698 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=5e2a38ff839d40b2b2f0d99275c8848a 00:21:21.698 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 5E2A38FF839D40B2B2F0D99275C8848A 00:21:21.698 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 5E2A38FF839D40B2B2F0D99275C8848A == \5\E\2\A\3\8\F\F\8\3\9\D\4\0\B\2\B\2\F\0\D\9\9\2\7\5\C\8\8\4\8\A ]] 00:21:21.698 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:21:21.698 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:21.698 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:21.698 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:21:21.698 10:34:55 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:21.698 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:21:21.698 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:21.698 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid ffcff24b-f711-4568-81d2-d8b2e281d9c0 00:21:21.698 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:21.698 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:21:21.698 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:21:21.698 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:21:21.698 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:21.698 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=ffcff24bf711456881d2d8b2e281d9c0 00:21:21.698 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo FFCFF24BF711456881D2D8B2E281D9C0 00:21:21.698 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ FFCFF24BF711456881D2D8B2E281D9C0 == \F\F\C\F\F\2\4\B\F\7\1\1\4\5\6\8\8\1\D\2\D\8\B\2\E\2\8\1\D\9\C\0 ]] 00:21:21.698 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:21:21.957 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:21:21.957 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:21:21.957 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1581037 00:21:21.957 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1581037 ']' 00:21:21.957 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1581037 00:21:21.957 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:21.957 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:21.957 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1581037 00:21:21.957 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:21.957 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:21.957 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1581037' 00:21:21.957 killing process with pid 1581037 00:21:21.957 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1581037 00:21:21.957 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1581037 00:21:22.216 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:21:22.216 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:22.216 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:21:22.216 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:22.216 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:21:22.216 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:22.216 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:22.216 rmmod nvme_tcp 00:21:22.216 rmmod nvme_fabrics 00:21:22.216 rmmod nvme_keyring 00:21:22.216 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:22.216 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:21:22.216 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:21:22.216 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 1581012 ']' 00:21:22.216 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 1581012 00:21:22.216 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1581012 ']' 00:21:22.216 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1581012 00:21:22.216 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:22.216 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:22.216 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1581012 00:21:22.502 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:22.502 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:22.502 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1581012' 00:21:22.502 killing process with pid 1581012 00:21:22.502 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1581012 00:21:22.502 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1581012 00:21:22.502 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:22.502 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:22.502 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:22.502 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:21:22.502 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:21:22.502 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:22.502 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:21:22.502 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:22.502 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:22.502 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.502 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:22.502 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:24.551 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:24.551 00:21:24.551 real 0m12.322s 00:21:24.551 user 0m9.670s 
00:21:24.551 sys 0m5.433s 00:21:24.551 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:24.551 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:24.551 ************************************ 00:21:24.551 END TEST nvmf_nsid 00:21:24.551 ************************************ 00:21:24.551 10:34:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:21:24.551 00:21:24.551 real 11m57.999s 00:21:24.551 user 25m37.838s 00:21:24.551 sys 3m38.878s 00:21:24.551 10:34:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:24.551 10:34:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:24.551 ************************************ 00:21:24.551 END TEST nvmf_target_extra 00:21:24.551 ************************************ 00:21:24.551 10:34:58 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:24.551 10:34:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:24.551 10:34:58 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:24.551 10:34:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:24.810 ************************************ 00:21:24.810 START TEST nvmf_host 00:21:24.810 ************************************ 00:21:24.810 10:34:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:24.810 * Looking for test storage... 00:21:24.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:21:24.810 10:34:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:24.810 10:34:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:21:24.810 10:34:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:24.810 10:34:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:24.810 10:34:58 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:24.810 10:34:58 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:24.810 10:34:58 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:24.810 10:34:58 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:24.810 10:34:58 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:24.810 10:34:58 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:24.810 10:34:58 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:24.810 10:34:58 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:24.810 10:34:58 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:24.810 10:34:58 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:24.810 10:34:58 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:24.810 10:34:58 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:21:24.810 10:34:58 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:21:24.810 10:34:58 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:24.810 10:34:58 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:24.810 10:34:58 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:21:24.810 10:34:58 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:21:24.810 10:34:58 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:24.810 10:34:58 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:21:24.810 10:34:58 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:24.810 10:34:58 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:21:24.810 10:34:58 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:24.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.811 --rc genhtml_branch_coverage=1 00:21:24.811 --rc genhtml_function_coverage=1 00:21:24.811 --rc genhtml_legend=1 00:21:24.811 --rc geninfo_all_blocks=1 00:21:24.811 --rc geninfo_unexecuted_blocks=1 00:21:24.811 00:21:24.811 ' 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:24.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.811 --rc genhtml_branch_coverage=1 00:21:24.811 --rc genhtml_function_coverage=1 00:21:24.811 --rc genhtml_legend=1 00:21:24.811 --rc geninfo_all_blocks=1 00:21:24.811 --rc geninfo_unexecuted_blocks=1 00:21:24.811 00:21:24.811 ' 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:24.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.811 --rc genhtml_branch_coverage=1 00:21:24.811 --rc genhtml_function_coverage=1 00:21:24.811 --rc genhtml_legend=1 00:21:24.811 --rc geninfo_all_blocks=1 00:21:24.811 --rc geninfo_unexecuted_blocks=1 00:21:24.811 00:21:24.811 ' 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:24.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.811 --rc genhtml_branch_coverage=1 00:21:24.811 --rc genhtml_function_coverage=1 00:21:24.811 --rc genhtml_legend=1 00:21:24.811 --rc geninfo_all_blocks=1 00:21:24.811 --rc geninfo_unexecuted_blocks=1 00:21:24.811 00:21:24.811 ' 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
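[editor's note] The firewall handling visible earlier in this log follows a tag-and-sweep pattern: every rule the tests insert carries an identifying `-m comment` tag (SPDK_NVMF), so teardown can remove all of them in one pass via iptables-save/grep/iptables-restore without tracking individual rules. A minimal sketch of the same idea — the wrapper names mirror the ipts/iptr helpers seen in the trace, but the bodies here are illustrative reconstructions:

    # Insert a rule tagged so it is attributable to this test run.
    ipts() {
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }
    # Drop every tagged rule in one pass by filtering the saved ruleset.
    iptr() {
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }
    ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    iptr                                                       # later: sweep all test rules

The expansion logged above ("iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:...'") is exactly what a wrapper of this shape produces.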
00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:24.811 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:24.811 10:34:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.071 ************************************ 00:21:25.071 START TEST nvmf_multicontroller 00:21:25.071 ************************************ 00:21:25.071 10:34:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:25.071 * Looking for test storage... 
00:21:25.071 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:25.071 10:34:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:25.071 10:34:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:21:25.071 10:34:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:25.071 10:34:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:25.071 10:34:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:25.071 10:34:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:25.071 10:34:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:25.071 10:34:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:21:25.071 10:34:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:21:25.071 10:34:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:21:25.071 10:34:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:21:25.071 10:34:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:21:25.071 10:34:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:21:25.071 10:34:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:21:25.071 10:34:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:25.071 10:34:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:21:25.071 10:34:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:21:25.071 10:34:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:25.071 10:34:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:25.071 10:34:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:21:25.071 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:21:25.071 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:25.071 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:21:25.071 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:21:25.071 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:21:25.071 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:21:25.071 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:25.071 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:21:25.071 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:21:25.071 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:25.071 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:25.071 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:21:25.071 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:25.071 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:25.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.071 --rc genhtml_branch_coverage=1 00:21:25.071 --rc genhtml_function_coverage=1 00:21:25.071 --rc genhtml_legend=1 00:21:25.071 --rc geninfo_all_blocks=1 00:21:25.071 --rc geninfo_unexecuted_blocks=1 00:21:25.071 00:21:25.071 ' 00:21:25.071 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:25.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.071 --rc genhtml_branch_coverage=1 00:21:25.071 --rc genhtml_function_coverage=1 00:21:25.071 --rc genhtml_legend=1 00:21:25.071 --rc geninfo_all_blocks=1 00:21:25.071 --rc geninfo_unexecuted_blocks=1 00:21:25.071 00:21:25.071 ' 00:21:25.071 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:25.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.071 --rc genhtml_branch_coverage=1 00:21:25.071 --rc genhtml_function_coverage=1 00:21:25.071 --rc genhtml_legend=1 00:21:25.071 --rc geninfo_all_blocks=1 00:21:25.071 --rc geninfo_unexecuted_blocks=1 00:21:25.071 00:21:25.071 ' 00:21:25.071 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:25.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.071 --rc genhtml_branch_coverage=1 00:21:25.071 --rc genhtml_function_coverage=1 00:21:25.071 --rc genhtml_legend=1 00:21:25.071 --rc geninfo_all_blocks=1 00:21:25.071 --rc geninfo_unexecuted_blocks=1 00:21:25.071 00:21:25.071 ' 00:21:25.071 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:25.071 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:25.071 10:34:59 
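The lt/cmp_versions trace above (run once for nvmf_host.sh and repeated here for multicontroller.sh) decides whether the installed lcov predates version 2 before enabling the legacy --rc coverage options. A condensed, illustrative paraphrase of that comparison, assuming purely numeric dotted fields (a sketch, not the exact scripts/common.sh source):

    ver_lt() {                        # "is $1 < $2?" for dotted version strings
        local -a v1 v2; local i n
        IFS=.-: read -ra v1 <<< "$1"  # split on the same separators the trace uses
        IFS=.-: read -ra v2 <<< "$2"
        n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0  # first differing field decides
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1                      # equal versions are not less-than
    }
    ver_lt 1.15 2 && echo "lcov < 2: add the legacy --rc coverage options"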
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:25.071 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:25.071 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:25.071 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:25.071 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:25.071 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:25.071 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:25.071 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:25.071 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:25.071 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:25.071 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:25.071 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:21:25.071 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:25.071 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:25.071 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:25.071 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:25.071 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:25.071 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:21:25.071 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:25.071 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:25.071 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:25.071 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.071 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.071 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.071 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:25.072 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.072 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:21:25.072 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:25.072 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:25.072 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:25.072 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:25.072 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:25.072 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:25.072 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:25.072 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:25.072 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:25.072 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:25.072 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:25.072 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:25.072 10:34:59 
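The two "line 33: [: : integer expression expected" messages above (one per sourcing of test/nvmf/common.sh) come from a numeric [ ... -eq 1 ] test against a variable that is empty in this run; the test merely fails and the harness continues. A minimal reproduction and the usual guard, with a hypothetical variable name standing in for the real one:

    flag=""                           # empty, as when the option is not exported
    # [ "$flag" -eq 1 ]               # would print: [: : integer expression expected
    if [ "${flag:-0}" -eq 1 ]; then   # defaulting empty to 0 keeps the test quiet
        echo "option enabled"
    fi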
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:25.072 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:25.072 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:25.072 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:25.072 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:25.072 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:25.072 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:25.072 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:25.072 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:25.072 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:25.072 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.072 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:25.072 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.072 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:25.072 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:25.072 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:21:25.072 10:34:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:21:31.639 
10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:31.639 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:31.639 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:31.639 10:35:04 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:31.639 Found net devices under 0000:af:00.0: cvl_0_0 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:31.639 Found net devices under 0000:af:00.1: cvl_0_1 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
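The scan above ("Found net devices under 0000:af:00.0: cvl_0_0", then cvl_0_1) resolves each E810 port to its kernel interface through sysfs: a bound network PCI function lists its netdevs under /sys/bus/pci/devices/<bdf>/net/. A minimal sketch using the first BDF from this run:

    pci=0000:af:00.0
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$dev" ] || continue                 # skip if no netdev is bound (driver unloaded)
        echo "Found net device under $pci: ${dev##*/}"
    done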
00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:31.639 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:31.639 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.342 ms 00:21:31.639 00:21:31.639 --- 10.0.0.2 ping statistics --- 00:21:31.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.639 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:31.639 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:31.639 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:21:31.639 00:21:31.639 --- 10.0.0.1 ping statistics --- 00:21:31.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.639 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:31.639 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:31.640 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:31.640 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:31.640 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:31.640 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:31.640 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:31.640 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:31.640 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:31.640 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=1585269 00:21:31.640 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:31.640 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 1585269 00:21:31.640 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1585269 ']' 00:21:31.640 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:31.640 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:31.640 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:31.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:31.640 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:31.640 10:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:31.640 [2024-12-12 10:35:05.029040] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
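Condensing the nvmf_tcp_init sequence traced above: the target-side port cvl_0_0 moves into its own network namespace so initiator (10.0.0.1) and target (10.0.0.2) traffic crosses a real link, the firewall is opened for NVMe/TCP port 4420, both directions are ping-verified, and nvmf_tgt is then launched inside that namespace. Interface and namespace names are the ones from this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                       # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # target -> initiator
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &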
00:21:31.640 [2024-12-12 10:35:05.029086] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:31.640 [2024-12-12 10:35:05.107501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:31.640 [2024-12-12 10:35:05.148021] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:31.640 [2024-12-12 10:35:05.148056] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:31.640 [2024-12-12 10:35:05.148063] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:31.640 [2024-12-12 10:35:05.148069] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:31.640 [2024-12-12 10:35:05.148074] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:31.640 [2024-12-12 10:35:05.149295] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:21:31.640 [2024-12-12 10:35:05.149405] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:31.640 [2024-12-12 10:35:05.149405] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:31.640 [2024-12-12 10:35:05.285965] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:31.640 Malloc0 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:31.640 [2024-12-12 10:35:05.344812] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:31.640 [2024-12-12 10:35:05.356756] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:31.640 Malloc1 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1585290 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1585290 /var/tmp/bdevperf.sock 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1585290 ']' 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:31.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
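The rpc_cmd calls above provision the freshly started target; with scripts/rpc.py they correspond roughly to the following (cnode2 repeats the same pattern with Malloc1, and the harness adds socket and retry handling around the same JSON-RPC methods):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421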
00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:31.640 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:31.900 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:31.900 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:31.900 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:31.900 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.900 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:31.900 NVMe0n1 00:21:31.900 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.901 1 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:31.901 request: 00:21:31.901 { 00:21:31.901 "name": "NVMe0", 00:21:31.901 "trtype": "tcp", 00:21:31.901 "traddr": "10.0.0.2", 00:21:31.901 "adrfam": "ipv4", 00:21:31.901 "trsvcid": "4420", 00:21:31.901 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:21:31.901 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:31.901 "hostaddr": "10.0.0.1", 00:21:31.901 "prchk_reftag": false, 00:21:31.901 "prchk_guard": false, 00:21:31.901 "hdgst": false, 00:21:31.901 "ddgst": false, 00:21:31.901 "allow_unrecognized_csi": false, 00:21:31.901 "method": "bdev_nvme_attach_controller", 00:21:31.901 "req_id": 1 00:21:31.901 } 00:21:31.901 Got JSON-RPC error response 00:21:31.901 response: 00:21:31.901 { 00:21:31.901 "code": -114, 00:21:31.901 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:31.901 } 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:31.901 request: 00:21:31.901 { 00:21:31.901 "name": "NVMe0", 00:21:31.901 "trtype": "tcp", 00:21:31.901 "traddr": "10.0.0.2", 00:21:31.901 "adrfam": "ipv4", 00:21:31.901 "trsvcid": "4420", 00:21:31.901 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:31.901 "hostaddr": "10.0.0.1", 00:21:31.901 "prchk_reftag": false, 00:21:31.901 "prchk_guard": false, 00:21:31.901 "hdgst": false, 00:21:31.901 "ddgst": false, 00:21:31.901 "allow_unrecognized_csi": false, 00:21:31.901 "method": "bdev_nvme_attach_controller", 00:21:31.901 "req_id": 1 00:21:31.901 } 00:21:31.901 Got JSON-RPC error response 00:21:31.901 response: 00:21:31.901 { 00:21:31.901 "code": -114, 00:21:31.901 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:31.901 } 00:21:31.901 10:35:05 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:31.901 request: 00:21:31.901 { 00:21:31.901 "name": "NVMe0", 00:21:31.901 "trtype": "tcp", 00:21:31.901 "traddr": "10.0.0.2", 00:21:31.901 "adrfam": "ipv4", 00:21:31.901 "trsvcid": "4420", 00:21:31.901 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:31.901 "hostaddr": "10.0.0.1", 00:21:31.901 "prchk_reftag": false, 00:21:31.901 "prchk_guard": false, 00:21:31.901 "hdgst": false, 00:21:31.901 "ddgst": false, 00:21:31.901 "multipath": "disable", 00:21:31.901 "allow_unrecognized_csi": false, 00:21:31.901 "method": "bdev_nvme_attach_controller", 00:21:31.901 "req_id": 1 00:21:31.901 } 00:21:31.901 Got JSON-RPC error response 00:21:31.901 response: 00:21:31.901 { 00:21:31.901 "code": -114, 00:21:31.901 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:21:31.901 } 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:31.901 10:35:05 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:31.901 request: 00:21:31.901 { 00:21:31.901 "name": "NVMe0", 00:21:31.901 "trtype": "tcp", 00:21:31.901 "traddr": "10.0.0.2", 00:21:31.901 "adrfam": "ipv4", 00:21:31.901 "trsvcid": "4420", 00:21:31.901 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:31.901 "hostaddr": "10.0.0.1", 00:21:31.901 "prchk_reftag": false, 00:21:31.901 "prchk_guard": false, 00:21:31.901 "hdgst": false, 00:21:31.901 "ddgst": false, 00:21:31.901 "multipath": "failover", 00:21:31.901 "allow_unrecognized_csi": false, 00:21:31.901 "method": "bdev_nvme_attach_controller", 00:21:31.901 "req_id": 1 00:21:31.901 } 00:21:31.901 Got JSON-RPC error response 00:21:31.901 response: 00:21:31.901 { 00:21:31.901 "code": -114, 00:21:31.901 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:31.901 } 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:31.901 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:31.902 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:31.902 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:31.902 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:31.902 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:31.902 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.902 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:32.160 NVMe0n1 00:21:32.160 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
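To summarize the four rejected attaches above against the attach that just succeeded: bdevperf accepts the existing controller name NVMe0 only as an additional path to the same subsystem. Changing the hostnqn, pointing the name at cnode2, or forbidding multipath with "-x disable" all return JSON-RPC error -114. Condensed, using the bdevperf RPC socket from above:

    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1  # first path
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1              # second path: OK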
00:21:32.160 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:32.160 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.160 10:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:32.160 10:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.160 10:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:32.160 10:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.160 10:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:32.160 00:21:32.160 10:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.160 10:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:32.160 10:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:32.160 10:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.160 10:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:32.160 10:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.160 10:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:32.160 10:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:33.538 { 00:21:33.538 "results": [ 00:21:33.538 { 00:21:33.538 "job": "NVMe0n1", 00:21:33.538 "core_mask": "0x1", 00:21:33.538 "workload": "write", 00:21:33.538 "status": "finished", 00:21:33.538 "queue_depth": 128, 00:21:33.538 "io_size": 4096, 00:21:33.538 "runtime": 1.007852, 00:21:33.538 "iops": 25154.486968324716, 00:21:33.538 "mibps": 98.25971472001842, 00:21:33.538 "io_failed": 0, 00:21:33.538 "io_timeout": 0, 00:21:33.538 "avg_latency_us": 5082.2309888954, 00:21:33.538 "min_latency_us": 1693.0133333333333, 00:21:33.538 "max_latency_us": 10173.683809523809 00:21:33.538 } 00:21:33.538 ], 00:21:33.538 "core_count": 1 00:21:33.538 } 00:21:33.538 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:33.538 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.538 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:33.538 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.538 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:21:33.538 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1585290 00:21:33.538 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
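The bdevperf result block above is internally consistent, as a quick check of the reported numbers shows: throughput is iops * io_size / 2^20 = 25154.49 * 4096 / 1048576 = 98.26 MiB/s, matching "mibps", and by Little's law the average latency should be roughly queue_depth / iops = 128 / 25154.49 s = 5089 us, in line with the reported avg_latency_us of 5082.23.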
common/autotest_common.sh@954 -- # '[' -z 1585290 ']' 00:21:33.538 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1585290 00:21:33.538 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:33.538 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:33.538 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1585290 00:21:33.538 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:33.538 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:33.538 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1585290' 00:21:33.538 killing process with pid 1585290 00:21:33.538 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1585290 00:21:33.538 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1585290 00:21:33.538 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:33.538 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.538 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:33.538 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.538 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:33.538 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.538 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:33.538 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.538 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:21:33.538 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:33.538 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:33.538 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:33.538 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:21:33.538 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:21:33.538 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:33.538 [2024-12-12 10:35:05.461885] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:21:33.538 [2024-12-12 10:35:05.461933] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1585290 ]
00:21:33.538 [2024-12-12 10:35:05.535531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:33.538 [2024-12-12 10:35:05.575751] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:21:33.538 [2024-12-12 10:35:06.081648] bdev.c:4957:bdev_name_add: *ERROR*: Bdev name 10d2992c-6d0c-40be-9596-fccecb8cbc7a already exists
00:21:33.538 [2024-12-12 10:35:06.081677] bdev.c:8177:bdev_register: *ERROR*: Unable to add uuid:10d2992c-6d0c-40be-9596-fccecb8cbc7a alias for bdev NVMe1n1
00:21:33.538 [2024-12-12 10:35:06.081685] bdev_nvme.c:4666:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:21:33.538 Running I/O for 1 seconds...
00:21:33.538 25097.00 IOPS, 98.04 MiB/s
00:21:33.538 Latency(us)
00:21:33.538 [2024-12-12T09:35:07.561Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:33.538 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:21:33.538 NVMe0n1 : 1.01 25154.49 98.26 0.00 0.00 5082.23 1693.01 10173.68
00:21:33.538 [2024-12-12T09:35:07.561Z] ===================================================================================================================
00:21:33.538 [2024-12-12T09:35:07.561Z] Total : 25154.49 98.26 0.00 0.00 5082.23 1693.01 10173.68
00:21:33.538 Received shutdown signal, test time was about 1.000000 seconds
00:21:33.538
00:21:33.538 Latency(us)
00:21:33.538 [2024-12-12T09:35:07.561Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:33.538 [2024-12-12T09:35:07.561Z] ===================================================================================================================
00:21:33.538 [2024-12-12T09:35:07.561Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:33.538 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:21:33.538 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:21:33.538 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file
00:21:33.538 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini
00:21:33.538 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup
00:21:33.538 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync
00:21:33.538 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:33.538 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e
00:21:33.538 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:33.538 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:21:33.538 rmmod nvme_tcp
00:21:33.538 rmmod nvme_fabrics
00:21:33.538 rmmod nvme_keyring
00:21:33.797 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:33.797 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e
00:21:33.797 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0
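The try.txt block above is bdevperf's captured stdout: the *ERROR* lines about bdev name 10d2992c-... are the expected fallout of attaching NVMe1 to a namespace whose UUID alias is already registered, and the two Latency(us) tables are the one-second write job (about 25k IOPS at queue depth 128, 4096-byte I/O) followed by the empty post-shutdown summary. The job itself is driven over bdevperf's RPC socket; stripped of the harness that step is the sketch below, with paths verbatim from this run (that bdevperf was started in wait-for-RPC mode, i.e. with -z, is an assumption, not shown in this excerpt).

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# bdevperf already holds the job config; perform_tests runs it and emits the
# JSON results plus the human-readable table captured in try.txt.
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests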
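Earlier in the teardown, the @954-@978 markers trace autotest_common.sh's killprocess helper stopping the bdevperf app (pid 1585290). Reconstructed from the xtrace it is roughly the function below; only the Linux/ps path is exercised in this run, so the sudo branch and the error handling are assumptions.

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                     # @954: a PID is required
    kill -0 "$pid" 2>/dev/null || return 0        # @958: nothing to do if gone
    local pname=""
    if [ "$(uname)" = Linux ]; then               # @959
        pname=$(ps --no-headers -o comm= "$pid")  # @960: here -> reactor_0
    fi
    if [ "$pname" = sudo ]; then                  # @964: never signal a bare sudo
        sudo kill "$pid"                          # assumed branch, not traced
    else
        echo "killing process with pid $pid"      # @972
        kill "$pid"                               # @973
    fi
    wait "$pid" 2>/dev/null                       # @978: reap before returning
}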
10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 1585269 ']' 00:21:33.798 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 1585269 00:21:33.798 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1585269 ']' 00:21:33.798 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1585269 00:21:33.798 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:33.798 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:33.798 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1585269 00:21:33.798 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:33.798 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:33.798 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1585269' 00:21:33.798 killing process with pid 1585269 00:21:33.798 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1585269 00:21:33.798 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1585269 00:21:34.056 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:34.056 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:34.056 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:34.056 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:21:34.056 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:21:34.056 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:34.056 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:21:34.056 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:34.056 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:34.056 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.056 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:34.056 10:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.959 10:35:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:35.959 00:21:35.959 real 0m11.072s 00:21:35.959 user 0m11.912s 00:21:35.959 sys 0m5.207s 00:21:35.959 10:35:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:35.959 10:35:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:35.959 ************************************ 00:21:35.959 END TEST nvmf_multicontroller 00:21:35.959 ************************************ 00:21:35.959 10:35:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:21:35.959 10:35:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:35.959 10:35:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:35.959 10:35:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.959 ************************************ 00:21:35.959 START TEST nvmf_aer 00:21:35.959 ************************************ 00:21:35.959 10:35:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:36.219 * Looking for test storage... 00:21:36.219 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:36.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.219 --rc genhtml_branch_coverage=1 00:21:36.219 --rc genhtml_function_coverage=1 00:21:36.219 --rc genhtml_legend=1 00:21:36.219 --rc geninfo_all_blocks=1 00:21:36.219 --rc geninfo_unexecuted_blocks=1 00:21:36.219 00:21:36.219 ' 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:36.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.219 --rc genhtml_branch_coverage=1 00:21:36.219 --rc genhtml_function_coverage=1 00:21:36.219 --rc genhtml_legend=1 00:21:36.219 --rc geninfo_all_blocks=1 00:21:36.219 --rc geninfo_unexecuted_blocks=1 00:21:36.219 00:21:36.219 ' 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:36.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.219 --rc genhtml_branch_coverage=1 00:21:36.219 --rc genhtml_function_coverage=1 00:21:36.219 --rc genhtml_legend=1 00:21:36.219 --rc geninfo_all_blocks=1 00:21:36.219 --rc geninfo_unexecuted_blocks=1 00:21:36.219 00:21:36.219 ' 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:36.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.219 --rc genhtml_branch_coverage=1 00:21:36.219 --rc genhtml_function_coverage=1 00:21:36.219 --rc genhtml_legend=1 00:21:36.219 --rc geninfo_all_blocks=1 00:21:36.219 --rc geninfo_unexecuted_blocks=1 00:21:36.219 00:21:36.219 ' 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.219 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:21:36.220 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.220 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:21:36.220 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:36.220 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:36.220 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:36.220 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:36.220 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:36.220 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:36.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:36.220 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:36.220 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:36.220 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:36.220 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:36.220 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:36.220 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:36.220 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:36.220 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:36.220 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:36.220 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:36.220 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:36.220 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:36.220 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:36.220 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:21:36.220 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:21:36.220 10:35:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:42.789 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:42.789 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:42.789 Found net devices under 0000:af:00.0: cvl_0_0 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:42.789 10:35:15 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:42.789 Found net devices under 0000:af:00.1: cvl_0_1 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:42.789 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:42.790 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:42.790 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:42.790 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:42.790 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:42.790 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:42.790 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:42.790 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:42.790 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:42.790 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:42.790 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:42.790 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:42.790 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:42.790 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:42.790 
10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:42.790 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:42.790 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:21:42.790 00:21:42.790 --- 10.0.0.2 ping statistics --- 00:21:42.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.790 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:21:42.790 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:42.790 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:42.790 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:21:42.790 00:21:42.790 --- 10.0.0.1 ping statistics --- 00:21:42.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.790 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:21:42.790 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:42.790 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:21:42.790 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:42.790 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:42.790 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:42.790 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:42.790 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:42.790 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:42.790 10:35:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=1589179 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 1589179 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 1589179 ']' 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:42.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:42.790 [2024-12-12 10:35:16.083854] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
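nvmfappstart above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace built a few lines earlier, so the target owns 10.0.0.2 on the e810 port cvl_0_0 while cvl_0_1 stays in the root namespace as the initiator side, then blocks until the RPC socket answers. Together with the aer.sh subsystem setup that follows in the trace, the sequence reduces to the sketch below; the polling loop is a simplified stand-in for waitforlisten, which also enforces a timeout.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# The RPC unix socket lives on the shared filesystem, so rpc.py works from
# the root namespace (default socket /var/tmp/spdk.sock).
until $SPDK/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done
# Subsystem setup exactly as traced: TCP transport, one malloc namespace,
# subsystem cnode1 capped at 2 namespaces, listener on 10.0.0.2:4420.
$SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
$SPDK/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420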
00:21:42.790 [2024-12-12 10:35:16.083898] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:42.790 [2024-12-12 10:35:16.158998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:42.790 [2024-12-12 10:35:16.202196] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:42.790 [2024-12-12 10:35:16.202233] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:42.790 [2024-12-12 10:35:16.202240] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:42.790 [2024-12-12 10:35:16.202246] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:42.790 [2024-12-12 10:35:16.202251] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:42.790 [2024-12-12 10:35:16.203725] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:42.790 [2024-12-12 10:35:16.203833] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:21:42.790 [2024-12-12 10:35:16.203941] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:42.790 [2024-12-12 10:35:16.203942] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:42.790 [2024-12-12 10:35:16.350054] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:42.790 Malloc0 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:42.790 [2024-12-12 10:35:16.409257] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:42.790 [ 00:21:42.790 { 00:21:42.790 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:42.790 "subtype": "Discovery", 00:21:42.790 "listen_addresses": [], 00:21:42.790 "allow_any_host": true, 00:21:42.790 "hosts": [] 00:21:42.790 }, 00:21:42.790 { 00:21:42.790 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:42.790 "subtype": "NVMe", 00:21:42.790 "listen_addresses": [ 00:21:42.790 { 00:21:42.790 "trtype": "TCP", 00:21:42.790 "adrfam": "IPv4", 00:21:42.790 "traddr": "10.0.0.2", 00:21:42.790 "trsvcid": "4420" 00:21:42.790 } 00:21:42.790 ], 00:21:42.790 "allow_any_host": true, 00:21:42.790 "hosts": [], 00:21:42.790 "serial_number": "SPDK00000000000001", 00:21:42.790 "model_number": "SPDK bdev Controller", 00:21:42.790 "max_namespaces": 2, 00:21:42.790 "min_cntlid": 1, 00:21:42.790 "max_cntlid": 65519, 00:21:42.790 "namespaces": [ 00:21:42.790 { 00:21:42.790 "nsid": 1, 00:21:42.790 "bdev_name": "Malloc0", 00:21:42.790 "name": "Malloc0", 00:21:42.790 "nguid": "C0DB34DEA6C047E287D6E6E75DA8F658", 00:21:42.790 "uuid": "c0db34de-a6c0-47e2-87d6-e6e75da8f658" 00:21:42.790 } 00:21:42.790 ] 00:21:42.790 } 00:21:42.790 ] 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1589241 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:21:42.790 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:42.791 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:42.791 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:21:42.791 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:21:42.791 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:42.791 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:42.791 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:42.791 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:21:42.791 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:42.791 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.791 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:42.791 Malloc1 00:21:42.791 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.791 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:42.791 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.791 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:42.791 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.791 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:42.791 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.791 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:42.791 [ 00:21:42.791 { 00:21:42.791 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:42.791 "subtype": "Discovery", 00:21:42.791 "listen_addresses": [], 00:21:42.791 "allow_any_host": true, 00:21:42.791 "hosts": [] 00:21:42.791 }, 00:21:42.791 { 00:21:42.791 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:42.791 "subtype": "NVMe", 00:21:42.791 "listen_addresses": [ 00:21:42.791 { 00:21:42.791 "trtype": "TCP", 00:21:42.791 "adrfam": "IPv4", 00:21:42.791 "traddr": "10.0.0.2", 00:21:42.791 "trsvcid": "4420" 00:21:42.791 } 00:21:42.791 ], 00:21:42.791 "allow_any_host": true, 00:21:42.791 "hosts": [], 00:21:42.791 "serial_number": "SPDK00000000000001", 00:21:42.791 "model_number": "SPDK bdev Controller", 00:21:42.791 "max_namespaces": 2, 00:21:42.791 "min_cntlid": 1, 00:21:42.791 "max_cntlid": 65519, 00:21:42.791 "namespaces": [ 00:21:42.791 { 00:21:42.791 "nsid": 1, 00:21:42.791 "bdev_name": "Malloc0", 00:21:42.791 "name": "Malloc0", 00:21:42.791 "nguid": "C0DB34DEA6C047E287D6E6E75DA8F658", 00:21:42.791 "uuid": "c0db34de-a6c0-47e2-87d6-e6e75da8f658" 00:21:42.791 }, 00:21:42.791 { 00:21:42.791 "nsid": 2, 00:21:42.791 "bdev_name": "Malloc1", 00:21:42.791 "name": "Malloc1", 00:21:42.791 "nguid": "AEEA36BE7AEB4597B3A4F9A3F7A62AC8", 00:21:42.791 Asynchronous 
Event Request test 00:21:42.791 Attaching to 10.0.0.2 00:21:42.791 Attached to 10.0.0.2 00:21:42.791 Registering asynchronous event callbacks... 00:21:42.791 Starting namespace attribute notice tests for all controllers... 00:21:42.791 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:42.791 aer_cb - Changed Namespace 00:21:42.791 Cleaning up... 00:21:42.791 "uuid": "aeea36be-7aeb-4597-b3a4-f9a3f7a62ac8" 00:21:42.791 } 00:21:42.791 ] 00:21:42.791 } 00:21:42.791 ] 00:21:42.791 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.791 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1589241 00:21:42.791 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:42.791 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.791 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:42.791 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.791 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:42.791 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.791 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:42.791 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.791 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:42.791 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.791 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:42.791 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.791 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:42.791 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:42.791 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:42.791 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:21:42.791 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:42.791 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:21:42.791 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:42.791 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:42.791 rmmod nvme_tcp 00:21:42.791 rmmod nvme_fabrics 00:21:43.050 rmmod nvme_keyring 00:21:43.050 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:43.050 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:21:43.050 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:21:43.050 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 1589179 ']' 00:21:43.050 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 1589179 00:21:43.050 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 1589179 ']' 00:21:43.050 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 1589179 00:21:43.050 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:21:43.050 10:35:16 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:43.050 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1589179 00:21:43.050 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:43.050 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:43.050 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1589179' 00:21:43.050 killing process with pid 1589179 00:21:43.050 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 1589179 00:21:43.050 10:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 1589179 00:21:43.050 10:35:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:43.050 10:35:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:43.050 10:35:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:43.050 10:35:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:21:43.050 10:35:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:21:43.050 10:35:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:43.050 10:35:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:21:43.050 10:35:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:43.050 10:35:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:43.050 10:35:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.050 10:35:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:43.050 10:35:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.584 10:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:45.585 00:21:45.585 real 0m9.150s 00:21:45.585 user 0m5.146s 00:21:45.585 sys 0m4.787s 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:45.585 ************************************ 00:21:45.585 END TEST nvmf_aer 00:21:45.585 ************************************ 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:45.585 ************************************ 00:21:45.585 START TEST nvmf_async_init 00:21:45.585 ************************************ 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:45.585 * Looking for test storage... 
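The aer suite ends in the same nvmftestfini sequence the multicontroller suite used: unload the kernel initiator modules, kill the target, strip the SPDK_NVMF iptables rules, and dismantle the namespace. As one block it is roughly the sketch below; the retry back-off and the body of _remove_spdk_ns are assumptions, the rest are the commands traced above.

set +e
for i in {1..20}; do                      # nvmf/common.sh@125-@126
    modprobe -v -r nvme-tcp && break      # -v output: rmmod nvme_tcp/nvme_fabrics/nvme_keyring
    sleep 1                               # assumed back-off between attempts
done
modprobe -v -r nvme-fabrics               # @127
set -e
killprocess "$nvmfpid"                    # @518: stop nvmf_tgt
iptables-save | grep -v SPDK_NVMF | iptables-restore  # @297 iptr: drop test ACCEPT rules
ip netns delete cvl_0_0_ns_spdk 2>/dev/null           # assumed _remove_spdk_ns body
ip -4 addr flush cvl_0_1                  # @303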
00:21:45.585 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:45.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.585 --rc genhtml_branch_coverage=1 00:21:45.585 --rc genhtml_function_coverage=1 00:21:45.585 --rc genhtml_legend=1 00:21:45.585 --rc geninfo_all_blocks=1 00:21:45.585 --rc geninfo_unexecuted_blocks=1 00:21:45.585 00:21:45.585 ' 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:45.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.585 --rc genhtml_branch_coverage=1 00:21:45.585 --rc genhtml_function_coverage=1 00:21:45.585 --rc genhtml_legend=1 00:21:45.585 --rc geninfo_all_blocks=1 00:21:45.585 --rc geninfo_unexecuted_blocks=1 00:21:45.585 00:21:45.585 ' 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:45.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.585 --rc genhtml_branch_coverage=1 00:21:45.585 --rc genhtml_function_coverage=1 00:21:45.585 --rc genhtml_legend=1 00:21:45.585 --rc geninfo_all_blocks=1 00:21:45.585 --rc geninfo_unexecuted_blocks=1 00:21:45.585 00:21:45.585 ' 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:45.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.585 --rc genhtml_branch_coverage=1 00:21:45.585 --rc genhtml_function_coverage=1 00:21:45.585 --rc genhtml_legend=1 00:21:45.585 --rc geninfo_all_blocks=1 00:21:45.585 --rc geninfo_unexecuted_blocks=1 00:21:45.585 00:21:45.585 ' 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:45.585 10:35:19 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.585 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:45.586 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.586 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:21:45.586 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:45.586 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:45.586 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:45.586 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:45.586 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:45.586 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:45.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:45.586 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:45.586 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:45.586 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:45.586 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:45.586 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:45.586 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:45.586 10:35:19 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:45.586 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:45.586 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:45.586 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=89c3b76b99dc47d3bbef5282f1f2d049 00:21:45.586 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:45.586 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:45.586 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:45.586 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:45.586 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:45.586 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:45.586 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.586 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:45.586 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.586 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:45.586 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:45.586 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:21:45.586 10:35:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:52.153 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:52.153 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:52.153 Found net devices under 0000:af:00.0: cvl_0_0 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:52.153 Found net devices under 0000:af:00.1: cvl_0_1 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:52.153 10:35:24 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:52.153 10:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:52.153 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:52.153 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:52.153 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:52.153 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:52.153 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:52.153 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:52.153 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:52.153 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:52.153 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:52.153 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:52.153 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:52.153 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:21:52.153 00:21:52.153 --- 10.0.0.2 ping statistics --- 00:21:52.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.153 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:21:52.153 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:52.153 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:52.153 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:21:52.153 00:21:52.153 --- 10.0.0.1 ping statistics --- 00:21:52.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.154 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=1592712 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 1592712 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 1592712 ']' 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:52.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:52.154 [2024-12-12 10:35:25.339722] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:21:52.154 [2024-12-12 10:35:25.339771] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:52.154 [2024-12-12 10:35:25.419800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.154 [2024-12-12 10:35:25.458561] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:52.154 [2024-12-12 10:35:25.458599] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:52.154 [2024-12-12 10:35:25.458607] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:52.154 [2024-12-12 10:35:25.458612] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:52.154 [2024-12-12 10:35:25.458617] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:52.154 [2024-12-12 10:35:25.459114] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:52.154 [2024-12-12 10:35:25.603110] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:52.154 null0 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 89c3b76b99dc47d3bbef5282f1f2d049 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:52.154 [2024-12-12 10:35:25.647374] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:52.154 nvme0n1 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:52.154 [ 00:21:52.154 { 00:21:52.154 "name": "nvme0n1", 00:21:52.154 "aliases": [ 00:21:52.154 "89c3b76b-99dc-47d3-bbef-5282f1f2d049" 00:21:52.154 ], 00:21:52.154 "product_name": "NVMe disk", 00:21:52.154 "block_size": 512, 00:21:52.154 "num_blocks": 2097152, 00:21:52.154 "uuid": "89c3b76b-99dc-47d3-bbef-5282f1f2d049", 00:21:52.154 "numa_id": 1, 00:21:52.154 "assigned_rate_limits": { 00:21:52.154 "rw_ios_per_sec": 0, 00:21:52.154 "rw_mbytes_per_sec": 0, 00:21:52.154 "r_mbytes_per_sec": 0, 00:21:52.154 "w_mbytes_per_sec": 0 00:21:52.154 }, 00:21:52.154 "claimed": false, 00:21:52.154 "zoned": false, 00:21:52.154 "supported_io_types": { 00:21:52.154 "read": true, 00:21:52.154 "write": true, 00:21:52.154 "unmap": false, 00:21:52.154 "flush": true, 00:21:52.154 "reset": true, 00:21:52.154 "nvme_admin": true, 00:21:52.154 "nvme_io": true, 00:21:52.154 "nvme_io_md": false, 00:21:52.154 "write_zeroes": true, 00:21:52.154 "zcopy": false, 00:21:52.154 "get_zone_info": false, 00:21:52.154 "zone_management": false, 00:21:52.154 "zone_append": false, 00:21:52.154 "compare": true, 00:21:52.154 "compare_and_write": true, 00:21:52.154 "abort": true, 00:21:52.154 "seek_hole": false, 00:21:52.154 "seek_data": false, 00:21:52.154 "copy": true, 00:21:52.154 "nvme_iov_md": false 00:21:52.154 }, 00:21:52.154 
"memory_domains": [ 00:21:52.154 { 00:21:52.154 "dma_device_id": "system", 00:21:52.154 "dma_device_type": 1 00:21:52.154 } 00:21:52.154 ], 00:21:52.154 "driver_specific": { 00:21:52.154 "nvme": [ 00:21:52.154 { 00:21:52.154 "trid": { 00:21:52.154 "trtype": "TCP", 00:21:52.154 "adrfam": "IPv4", 00:21:52.154 "traddr": "10.0.0.2", 00:21:52.154 "trsvcid": "4420", 00:21:52.154 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:52.154 }, 00:21:52.154 "ctrlr_data": { 00:21:52.154 "cntlid": 1, 00:21:52.154 "vendor_id": "0x8086", 00:21:52.154 "model_number": "SPDK bdev Controller", 00:21:52.154 "serial_number": "00000000000000000000", 00:21:52.154 "firmware_revision": "25.01", 00:21:52.154 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:52.154 "oacs": { 00:21:52.154 "security": 0, 00:21:52.154 "format": 0, 00:21:52.154 "firmware": 0, 00:21:52.154 "ns_manage": 0 00:21:52.154 }, 00:21:52.154 "multi_ctrlr": true, 00:21:52.154 "ana_reporting": false 00:21:52.154 }, 00:21:52.154 "vs": { 00:21:52.154 "nvme_version": "1.3" 00:21:52.154 }, 00:21:52.154 "ns_data": { 00:21:52.154 "id": 1, 00:21:52.154 "can_share": true 00:21:52.154 } 00:21:52.154 } 00:21:52.154 ], 00:21:52.154 "mp_policy": "active_passive" 00:21:52.154 } 00:21:52.154 } 00:21:52.154 ] 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.154 10:35:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:52.154 [2024-12-12 10:35:25.908841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:52.155 [2024-12-12 10:35:25.908913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa33250 (9): Bad file descriptor 00:21:52.155 [2024-12-12 10:35:26.040643] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:21:52.155 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.155 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:52.155 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.155 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:52.155 [ 00:21:52.155 { 00:21:52.155 "name": "nvme0n1", 00:21:52.155 "aliases": [ 00:21:52.155 "89c3b76b-99dc-47d3-bbef-5282f1f2d049" 00:21:52.155 ], 00:21:52.155 "product_name": "NVMe disk", 00:21:52.155 "block_size": 512, 00:21:52.155 "num_blocks": 2097152, 00:21:52.155 "uuid": "89c3b76b-99dc-47d3-bbef-5282f1f2d049", 00:21:52.155 "numa_id": 1, 00:21:52.155 "assigned_rate_limits": { 00:21:52.155 "rw_ios_per_sec": 0, 00:21:52.155 "rw_mbytes_per_sec": 0, 00:21:52.155 "r_mbytes_per_sec": 0, 00:21:52.155 "w_mbytes_per_sec": 0 00:21:52.155 }, 00:21:52.155 "claimed": false, 00:21:52.155 "zoned": false, 00:21:52.155 "supported_io_types": { 00:21:52.155 "read": true, 00:21:52.155 "write": true, 00:21:52.155 "unmap": false, 00:21:52.155 "flush": true, 00:21:52.155 "reset": true, 00:21:52.155 "nvme_admin": true, 00:21:52.155 "nvme_io": true, 00:21:52.155 "nvme_io_md": false, 00:21:52.155 "write_zeroes": true, 00:21:52.155 "zcopy": false, 00:21:52.155 "get_zone_info": false, 00:21:52.155 "zone_management": false, 00:21:52.155 "zone_append": false, 00:21:52.155 "compare": true, 00:21:52.155 "compare_and_write": true, 00:21:52.155 "abort": true, 00:21:52.155 "seek_hole": false, 00:21:52.155 "seek_data": false, 00:21:52.155 "copy": true, 00:21:52.155 "nvme_iov_md": false 00:21:52.155 }, 00:21:52.155 "memory_domains": [ 00:21:52.155 { 00:21:52.155 "dma_device_id": "system", 00:21:52.155 "dma_device_type": 1 00:21:52.155 } 00:21:52.155 ], 00:21:52.155 "driver_specific": { 00:21:52.155 "nvme": [ 00:21:52.155 { 00:21:52.155 "trid": { 00:21:52.155 "trtype": "TCP", 00:21:52.155 "adrfam": "IPv4", 00:21:52.155 "traddr": "10.0.0.2", 00:21:52.155 "trsvcid": "4420", 00:21:52.155 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:52.155 }, 00:21:52.155 "ctrlr_data": { 00:21:52.155 "cntlid": 2, 00:21:52.155 "vendor_id": "0x8086", 00:21:52.155 "model_number": "SPDK bdev Controller", 00:21:52.155 "serial_number": "00000000000000000000", 00:21:52.155 "firmware_revision": "25.01", 00:21:52.155 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:52.155 "oacs": { 00:21:52.155 "security": 0, 00:21:52.155 "format": 0, 00:21:52.155 "firmware": 0, 00:21:52.155 "ns_manage": 0 00:21:52.155 }, 00:21:52.155 "multi_ctrlr": true, 00:21:52.155 "ana_reporting": false 00:21:52.155 }, 00:21:52.155 "vs": { 00:21:52.155 "nvme_version": "1.3" 00:21:52.155 }, 00:21:52.155 "ns_data": { 00:21:52.155 "id": 1, 00:21:52.155 "can_share": true 00:21:52.155 } 00:21:52.155 } 00:21:52.155 ], 00:21:52.155 "mp_policy": "active_passive" 00:21:52.155 } 00:21:52.155 } 00:21:52.155 ] 00:21:52.155 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.155 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:52.155 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.155 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:52.155 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
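(An aside for readers tracing the test flow rather than the raw log.) The two bdev dumps above come from a target that nvmf_tcp_init pinned inside the cvl_0_0_ns_spdk namespace at 10.0.0.2; the async_init setup then reduces to a short scripts/rpc.py sequence. A condensed sketch, assuming a default /var/tmp/spdk.sock target socket and the addresses from this rig (the suite's rpc_cmd wrapper and netns plumbing are bypassed here):

  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o                        # TCP transport (-o as passed via NVMF_TRANSPORT_OPTS)
  $rpc bdev_null_create null0 1024 512                        # 1024 MiB at 512 B blocks -> the 2097152 blocks seen above
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a    # -a: allow any host; tightened later for the TLS leg
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 89c3b76b99dc47d3bbef5282f1f2d049
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # Host side: attach, inspect, and exercise a reset.
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
  $rpc bdev_get_bdevs -b nvme0n1
  $rpc bdev_nvme_reset_controller nvme0

Note that the "uuid" alias in both dumps is just the dashed form of the -g nguid, and that "cntlid" climbs from 1 to 2 because the reset drops the connection and the target hands out a fresh controller ID on reconnect.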
00:21:52.155 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:52.155 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.luf5GUwjeM 00:21:52.155 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:52.155 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.luf5GUwjeM 00:21:52.155 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.luf5GUwjeM 00:21:52.155 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.155 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:52.155 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.155 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:52.155 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.155 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:52.155 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.155 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:52.155 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.155 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:52.155 [2024-12-12 10:35:26.113453] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:52.155 [2024-12-12 10:35:26.113556] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:52.155 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.155 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:21:52.155 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.155 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:52.155 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.155 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:52.155 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.155 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:52.155 [2024-12-12 10:35:26.129512] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:52.414 nvme0n1 00:21:52.414 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.414 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:21:52.414 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.414 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:52.414 [ 00:21:52.414 { 00:21:52.414 "name": "nvme0n1", 00:21:52.414 "aliases": [ 00:21:52.414 "89c3b76b-99dc-47d3-bbef-5282f1f2d049" 00:21:52.414 ], 00:21:52.414 "product_name": "NVMe disk", 00:21:52.414 "block_size": 512, 00:21:52.414 "num_blocks": 2097152, 00:21:52.414 "uuid": "89c3b76b-99dc-47d3-bbef-5282f1f2d049", 00:21:52.414 "numa_id": 1, 00:21:52.414 "assigned_rate_limits": { 00:21:52.414 "rw_ios_per_sec": 0, 00:21:52.414 "rw_mbytes_per_sec": 0, 00:21:52.414 "r_mbytes_per_sec": 0, 00:21:52.414 "w_mbytes_per_sec": 0 00:21:52.414 }, 00:21:52.414 "claimed": false, 00:21:52.414 "zoned": false, 00:21:52.414 "supported_io_types": { 00:21:52.414 "read": true, 00:21:52.414 "write": true, 00:21:52.414 "unmap": false, 00:21:52.414 "flush": true, 00:21:52.414 "reset": true, 00:21:52.414 "nvme_admin": true, 00:21:52.414 "nvme_io": true, 00:21:52.414 "nvme_io_md": false, 00:21:52.414 "write_zeroes": true, 00:21:52.414 "zcopy": false, 00:21:52.414 "get_zone_info": false, 00:21:52.414 "zone_management": false, 00:21:52.414 "zone_append": false, 00:21:52.414 "compare": true, 00:21:52.414 "compare_and_write": true, 00:21:52.414 "abort": true, 00:21:52.414 "seek_hole": false, 00:21:52.414 "seek_data": false, 00:21:52.414 "copy": true, 00:21:52.414 "nvme_iov_md": false 00:21:52.414 }, 00:21:52.414 "memory_domains": [ 00:21:52.414 { 00:21:52.414 "dma_device_id": "system", 00:21:52.414 "dma_device_type": 1 00:21:52.414 } 00:21:52.414 ], 00:21:52.414 "driver_specific": { 00:21:52.414 "nvme": [ 00:21:52.414 { 00:21:52.414 "trid": { 00:21:52.414 "trtype": "TCP", 00:21:52.414 "adrfam": "IPv4", 00:21:52.414 "traddr": "10.0.0.2", 00:21:52.414 "trsvcid": "4421", 00:21:52.414 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:52.414 }, 00:21:52.414 "ctrlr_data": { 00:21:52.414 "cntlid": 3, 00:21:52.414 "vendor_id": "0x8086", 00:21:52.414 "model_number": "SPDK bdev Controller", 00:21:52.414 "serial_number": "00000000000000000000", 00:21:52.414 "firmware_revision": "25.01", 00:21:52.414 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:52.414 "oacs": { 00:21:52.414 "security": 0, 00:21:52.414 "format": 0, 00:21:52.414 "firmware": 0, 00:21:52.414 "ns_manage": 0 00:21:52.414 }, 00:21:52.414 "multi_ctrlr": true, 00:21:52.414 "ana_reporting": false 00:21:52.414 }, 00:21:52.414 "vs": { 00:21:52.414 "nvme_version": "1.3" 00:21:52.414 }, 00:21:52.414 "ns_data": { 00:21:52.414 "id": 1, 00:21:52.414 "can_share": true 00:21:52.414 } 00:21:52.414 } 00:21:52.414 ], 00:21:52.414 "mp_policy": "active_passive" 00:21:52.414 } 00:21:52.414 } 00:21:52.414 ] 00:21:52.414 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.414 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:52.414 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.414 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:52.414 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.414 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.luf5GUwjeM 00:21:52.414 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
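The secure leg just above follows SPDK's keyring flow for NVMe/TCP TLS (both target and host sides log that TLS support is still considered experimental). A sketch of the same steps, with the PSK copied from the log and a fixed /tmp path standing in for the mktemp result:

  key=/tmp/psk.txt                                            # mktemp in the real run
  echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key"
  chmod 0600 "$key"
  rpc=scripts/rpc.py
  $rpc keyring_file_add_key key0 "$key"                       # register the PSK under the name key0
  $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0

The third dump above shows the controller reattached on port 4421 with cntlid 3 and the same namespace UUID as before; one way to pull that field out for a quick check (assuming jq is installed) is:

  $rpc bdev_get_bdevs -b nvme0n1 | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'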
00:21:52.414 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:21:52.414 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:52.414 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:21:52.414 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:52.414 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:21:52.414 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:52.414 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:52.414 rmmod nvme_tcp 00:21:52.414 rmmod nvme_fabrics 00:21:52.414 rmmod nvme_keyring 00:21:52.414 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:52.414 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:21:52.414 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:21:52.414 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 1592712 ']' 00:21:52.414 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 1592712 00:21:52.414 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 1592712 ']' 00:21:52.414 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 1592712 00:21:52.414 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:21:52.414 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:52.414 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1592712 00:21:52.414 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:52.415 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:52.415 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1592712' 00:21:52.415 killing process with pid 1592712 00:21:52.415 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 1592712 00:21:52.415 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 1592712 00:21:52.673 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:52.673 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:52.673 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:52.673 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:21:52.673 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:21:52.673 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:52.673 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:21:52.673 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:52.674 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:52.674 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
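nvmftestfini, which starts here and finishes with the address flush just below, unwinds what nvmftestinit built: it unloads nvme-tcp/nvme-fabrics/nvme-keyring, kills the nvmf_tgt pid, strips the SPDK-tagged iptables rule, and removes the test namespace. A condensed sketch of that cleanup, assuming the cvl_0_* names from this rig (the ip netns delete is a hypothetical inline stand-in for the suite's _remove_spdk_ns helper):

  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the tagged ACCEPT rule for cvl_0_1:4420
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null            # stand-in for _remove_spdk_ns
  ip -4 addr flush cvl_0_1                               # clear 10.0.0.1/24 from the initiator interface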
00:21:52.674 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:52.674 10:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.578 10:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:54.578 00:21:54.578 real 0m9.387s 00:21:54.578 user 0m3.073s 00:21:54.578 sys 0m4.739s 00:21:54.578 10:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:54.578 10:35:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:54.578 ************************************ 00:21:54.578 END TEST nvmf_async_init 00:21:54.578 ************************************ 00:21:54.837 10:35:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:54.837 10:35:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:54.837 10:35:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:54.837 10:35:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.837 ************************************ 00:21:54.837 START TEST dma 00:21:54.837 ************************************ 00:21:54.837 10:35:28 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:54.837 * Looking for test storage... 00:21:54.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:54.837 10:35:28 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:54.837 10:35:28 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:21:54.837 10:35:28 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:54.837 10:35:28 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:54.837 10:35:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:54.837 10:35:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:54.837 10:35:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:54.837 10:35:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:21:54.837 10:35:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:21:54.837 10:35:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:21:54.837 10:35:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:21:54.837 10:35:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:21:54.837 10:35:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:21:54.837 10:35:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:21:54.837 10:35:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:54.838 10:35:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:21:54.838 10:35:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:21:54.838 10:35:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:54.838 10:35:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:54.838 10:35:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:21:54.838 10:35:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:21:54.838 10:35:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:54.838 10:35:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:21:54.838 10:35:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:21:54.838 10:35:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:21:54.838 10:35:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:21:54.838 10:35:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:54.838 10:35:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:21:54.838 10:35:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:21:54.838 10:35:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:54.838 10:35:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:54.838 10:35:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:21:54.838 10:35:28 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:54.838 10:35:28 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:54.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.838 --rc genhtml_branch_coverage=1 00:21:54.838 --rc genhtml_function_coverage=1 00:21:54.838 --rc genhtml_legend=1 00:21:54.838 --rc geninfo_all_blocks=1 00:21:54.838 --rc geninfo_unexecuted_blocks=1 00:21:54.838 00:21:54.838 ' 00:21:54.838 10:35:28 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:54.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.838 --rc genhtml_branch_coverage=1 00:21:54.838 --rc genhtml_function_coverage=1 00:21:54.838 --rc genhtml_legend=1 00:21:54.838 --rc geninfo_all_blocks=1 00:21:54.838 --rc geninfo_unexecuted_blocks=1 00:21:54.838 00:21:54.838 ' 00:21:54.838 10:35:28 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:54.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.838 --rc genhtml_branch_coverage=1 00:21:54.838 --rc genhtml_function_coverage=1 00:21:54.838 --rc genhtml_legend=1 00:21:54.838 --rc geninfo_all_blocks=1 00:21:54.838 --rc geninfo_unexecuted_blocks=1 00:21:54.838 00:21:54.838 ' 00:21:54.838 10:35:28 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:54.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.838 --rc genhtml_branch_coverage=1 00:21:54.838 --rc genhtml_function_coverage=1 00:21:54.838 --rc genhtml_legend=1 00:21:54.838 --rc geninfo_all_blocks=1 00:21:54.838 --rc geninfo_unexecuted_blocks=1 00:21:54.838 00:21:54.838 ' 00:21:54.838 10:35:28 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:54.838 10:35:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:21:54.838 10:35:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:54.838 10:35:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:54.838 10:35:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:54.838 10:35:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:54.838 
10:35:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:54.838 10:35:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:54.838 10:35:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:54.838 10:35:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:54.838 10:35:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:54.838 10:35:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:54.838 10:35:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:54.838 10:35:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:21:54.838 10:35:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:54.838 10:35:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:54.838 10:35:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:54.838 10:35:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:54.838 10:35:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:54.838 10:35:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:21:55.097 10:35:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:55.097 10:35:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:55.097 10:35:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:55.098 10:35:28 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.098 10:35:28 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.098 10:35:28 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.098 10:35:28 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:21:55.098 10:35:28 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.098 10:35:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:21:55.098 10:35:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:55.098 10:35:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:55.098 10:35:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:55.098 10:35:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:55.098 10:35:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:55.098 10:35:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:55.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:55.098 10:35:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:55.098 10:35:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:55.098 10:35:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:55.098 10:35:28 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:55.098 10:35:28 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:21:55.098 00:21:55.098 real 0m0.212s 00:21:55.098 user 0m0.132s 00:21:55.098 sys 0m0.094s 00:21:55.098 10:35:28 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:55.098 10:35:28 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:21:55.098 ************************************ 00:21:55.098 END TEST dma 00:21:55.098 ************************************ 00:21:55.098 10:35:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:55.098 10:35:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:55.098 10:35:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:55.098 10:35:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.098 ************************************ 00:21:55.098 START TEST nvmf_identify 00:21:55.098 
************************************ 00:21:55.098 10:35:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:55.098 * Looking for test storage... 00:21:55.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:55.098 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:55.098 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:21:55.098 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:55.098 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:55.098 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:55.098 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:55.098 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:55.098 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:21:55.098 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:21:55.098 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:21:55.098 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:21:55.098 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:21:55.098 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:21:55.098 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:21:55.098 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:55.098 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:21:55.098 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:21:55.098 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:55.098 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:55.098 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:21:55.098 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:21:55.098 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:55.098 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:21:55.098 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:21:55.098 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:21:55.098 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:21:55.098 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:55.098 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:21:55.098 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:21:55.098 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:55.098 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:55.098 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:21:55.098 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:55.098 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:55.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.098 --rc genhtml_branch_coverage=1 00:21:55.098 --rc genhtml_function_coverage=1 00:21:55.098 --rc genhtml_legend=1 00:21:55.098 --rc geninfo_all_blocks=1 00:21:55.098 --rc geninfo_unexecuted_blocks=1 00:21:55.098 00:21:55.098 ' 00:21:55.098 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:55.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.098 --rc genhtml_branch_coverage=1 00:21:55.098 --rc genhtml_function_coverage=1 00:21:55.098 --rc genhtml_legend=1 00:21:55.098 --rc geninfo_all_blocks=1 00:21:55.098 --rc geninfo_unexecuted_blocks=1 00:21:55.098 00:21:55.098 ' 00:21:55.098 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:55.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.098 --rc genhtml_branch_coverage=1 00:21:55.098 --rc genhtml_function_coverage=1 00:21:55.098 --rc genhtml_legend=1 00:21:55.098 --rc geninfo_all_blocks=1 00:21:55.098 --rc geninfo_unexecuted_blocks=1 00:21:55.098 00:21:55.098 ' 00:21:55.098 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:55.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.098 --rc genhtml_branch_coverage=1 00:21:55.098 --rc genhtml_function_coverage=1 00:21:55.098 --rc genhtml_legend=1 00:21:55.098 --rc geninfo_all_blocks=1 00:21:55.098 --rc geninfo_unexecuted_blocks=1 00:21:55.098 00:21:55.098 ' 00:21:55.098 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:55.098 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:55.358 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:55.358 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:55.358 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:55.358 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:55.358 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:55.358 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:55.358 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:55.358 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:55.358 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:55.358 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:55.358 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:55.358 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:21:55.358 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:55.358 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:55.358 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:55.358 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:55.358 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:55.358 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:21:55.358 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:55.358 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:55.358 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:55.358 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.358 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.358 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.358 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:21:55.358 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.358 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:21:55.358 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:55.359 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:55.359 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:55.359 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:55.359 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:55.359 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:55.359 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:55.359 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:55.359 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:55.359 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:55.359 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:55.359 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:55.359 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:55.359 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:55.359 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:55.359 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:55.359 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:55.359 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:55.359 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.359 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:55.359 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.359 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:55.359 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:55.359 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:21:55.359 10:35:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.931 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:01.931 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:22:01.931 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:01.931 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:01.931 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:01.931 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:01.931 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:01.931 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:22:01.931 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:01.931 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:22:01.931 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:22:01.931 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:22:01.931 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:22:01.931 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:22:01.931 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:22:01.931 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:01.931 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:01.931 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:01.931 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:01.931 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:01.932 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:01.932 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
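(Aside on the two `[: : integer expression expected` complaints earlier in this run: nvmf/common.sh line 33 tests an empty string with `-eq`; a numeric guard such as `[ "${SPDK_TEST_FLAG:-0}" -eq 1 ]` — variable name assumed, the log does not show which flag is unset — would avoid the error without changing behavior, since the branch is not taken either way.)
The device scan in this stretch of the trace resolves each matched PCI function to its kernel net interface through sysfs (the "Found ..." / "Found net devices under ..." lines). A minimal standalone sketch of that lookup, assuming a single E810 port at the BDF seen in this run:

    pci=0000:af:00.0
    # vendor/device IDs live in sysfs; 0x8086/0x159b is the E810 match above
    echo "Found $pci ($(cat /sys/bus/pci/devices/"$pci"/vendor) - $(cat /sys/bus/pci/devices/"$pci"/device))"
    # any netdev bound to the function appears under .../net/
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$dev" ] || continue   # skip if no driver has bound a netdev
        echo "Found net devices under $pci: ${dev##*/}"
    done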
00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:01.932 Found net devices under 0000:af:00.0: cvl_0_0 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:01.932 Found net devices under 0000:af:00.1: cvl_0_1 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:01.932 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:01.932 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:22:01.932 00:22:01.932 --- 10.0.0.2 ping statistics --- 00:22:01.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:01.932 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:01.932 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:01.932 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:22:01.932 00:22:01.932 --- 10.0.0.1 ping statistics --- 00:22:01.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:01.932 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1596464 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1596464 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 1596464 ']' 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:01.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:01.932 10:35:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.932 [2024-12-12 10:35:35.034179] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
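Condensed, the namespace plumbing traced above reduces to the commands below, exactly as run here (this host's cvl_0_0/cvl_0_1 ports; the target port moves into the cvl_0_0_ns_spdk namespace while the initiator side stays in the root namespace):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target-facing port
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP

The two pings above confirmed the path in both directions before nvmf_tgt was launched under ip netns exec; the EAL and reactor notices that follow are that target process starting up on cores 0-3 (-m 0xF).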
00:22:01.932 [2024-12-12 10:35:35.034225] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:01.932 [2024-12-12 10:35:35.112708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:01.932 [2024-12-12 10:35:35.154234] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:01.932 [2024-12-12 10:35:35.154273] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:01.933 [2024-12-12 10:35:35.154280] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:01.933 [2024-12-12 10:35:35.154286] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:01.933 [2024-12-12 10:35:35.154291] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:01.933 [2024-12-12 10:35:35.155666] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:01.933 [2024-12-12 10:35:35.155780] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:01.933 [2024-12-12 10:35:35.155866] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.933 [2024-12-12 10:35:35.155866] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:22:01.933 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:01.933 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:22:01.933 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:01.933 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.933 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.933 [2024-12-12 10:35:35.265859] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:01.933 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.933 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:01.933 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:01.933 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.933 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:01.933 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.933 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.933 Malloc0 00:22:01.933 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.933 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:01.933 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.933 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.933 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.933 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:01.933 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.933 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.933 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.933 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:01.933 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.933 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.933 [2024-12-12 10:35:35.367419] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:01.933 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.933 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:01.933 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.933 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.933 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.933 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:01.933 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.933 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.933 [ 00:22:01.933 { 00:22:01.933 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:01.933 "subtype": "Discovery", 00:22:01.933 "listen_addresses": [ 00:22:01.933 { 00:22:01.933 "trtype": "TCP", 00:22:01.933 "adrfam": "IPv4", 00:22:01.933 "traddr": "10.0.0.2", 00:22:01.933 "trsvcid": "4420" 00:22:01.933 } 00:22:01.933 ], 00:22:01.933 "allow_any_host": true, 00:22:01.933 "hosts": [] 00:22:01.933 }, 00:22:01.933 { 00:22:01.933 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:01.933 "subtype": "NVMe", 00:22:01.933 "listen_addresses": [ 00:22:01.933 { 00:22:01.933 "trtype": "TCP", 00:22:01.933 "adrfam": "IPv4", 00:22:01.933 "traddr": "10.0.0.2", 00:22:01.933 "trsvcid": "4420" 00:22:01.933 } 00:22:01.933 ], 00:22:01.933 "allow_any_host": true, 00:22:01.933 "hosts": [], 00:22:01.933 "serial_number": "SPDK00000000000001", 00:22:01.933 "model_number": "SPDK bdev Controller", 00:22:01.933 "max_namespaces": 32, 00:22:01.933 "min_cntlid": 1, 00:22:01.933 "max_cntlid": 65519, 00:22:01.933 "namespaces": [ 00:22:01.933 { 00:22:01.933 "nsid": 1, 00:22:01.933 "bdev_name": "Malloc0", 00:22:01.933 "name": "Malloc0", 00:22:01.933 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:01.933 "eui64": "ABCDEF0123456789", 00:22:01.933 "uuid": "d8f99c3f-44a1-421e-bb31-ceeb6201a611" 00:22:01.933 } 00:22:01.933 ] 00:22:01.933 } 00:22:01.933 ] 00:22:01.933 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.933 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:01.933 [2024-12-12 10:35:35.423255] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:22:01.933 [2024-12-12 10:35:35.423289] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1596490 ] 00:22:01.933 [2024-12-12 10:35:35.469261] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:22:01.933 [2024-12-12 10:35:35.469302] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:01.933 [2024-12-12 10:35:35.469307] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:01.933 [2024-12-12 10:35:35.469317] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:01.933 [2024-12-12 10:35:35.469325] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:01.933 [2024-12-12 10:35:35.469752] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:22:01.933 [2024-12-12 10:35:35.469784] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x764690 0 00:22:01.933 [2024-12-12 10:35:35.483584] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:01.933 [2024-12-12 10:35:35.483614] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:01.933 [2024-12-12 10:35:35.483619] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:01.933 [2024-12-12 10:35:35.483622] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:01.933 [2024-12-12 10:35:35.483654] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.933 [2024-12-12 10:35:35.483660] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.933 [2024-12-12 10:35:35.483663] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x764690) 00:22:01.933 [2024-12-12 10:35:35.483677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:01.933 [2024-12-12 10:35:35.483694] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c6100, cid 0, qid 0 00:22:01.933 [2024-12-12 10:35:35.491578] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.933 [2024-12-12 10:35:35.491586] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.933 [2024-12-12 10:35:35.491590] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.933 [2024-12-12 10:35:35.491594] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c6100) on tqpair=0x764690 00:22:01.933 [2024-12-12 10:35:35.491605] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:01.933 [2024-12-12 10:35:35.491612] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:22:01.933 [2024-12-12 10:35:35.491617] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:22:01.933 [2024-12-12 10:35:35.491627] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.933 [2024-12-12 10:35:35.491631] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.933 [2024-12-12 10:35:35.491634] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x764690) 00:22:01.933 [2024-12-12 10:35:35.491640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.933 [2024-12-12 10:35:35.491653] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c6100, cid 0, qid 0 00:22:01.933 [2024-12-12 10:35:35.491812] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.933 [2024-12-12 10:35:35.491817] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.933 [2024-12-12 10:35:35.491820] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.933 [2024-12-12 10:35:35.491823] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c6100) on tqpair=0x764690 00:22:01.933 [2024-12-12 10:35:35.491828] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:22:01.933 [2024-12-12 10:35:35.491835] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:22:01.933 [2024-12-12 10:35:35.491841] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.933 [2024-12-12 10:35:35.491844] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.933 [2024-12-12 10:35:35.491847] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x764690) 00:22:01.933 [2024-12-12 10:35:35.491853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.933 [2024-12-12 10:35:35.491863] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c6100, cid 0, qid 0 00:22:01.933 [2024-12-12 10:35:35.491928] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.933 [2024-12-12 10:35:35.491934] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.933 [2024-12-12 10:35:35.491937] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.933 [2024-12-12 10:35:35.491940] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c6100) on tqpair=0x764690 00:22:01.933 [2024-12-12 10:35:35.491945] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:22:01.933 [2024-12-12 10:35:35.491952] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:01.934 [2024-12-12 10:35:35.491960] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.934 [2024-12-12 10:35:35.491963] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.934 [2024-12-12 10:35:35.491966] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x764690) 00:22:01.934 [2024-12-12 10:35:35.491972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.934 [2024-12-12 10:35:35.491981] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c6100, cid 0, qid 0 
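For reference, the rpc_cmd calls above (host/identify.sh@24-35) are thin wrappers around scripts/rpc.py; replayed by hand against the same target, assuming the default /var/tmp/spdk.sock RPC socket inside the namespace, the configuration would read:

    rpc() { ip netns exec cvl_0_0_ns_spdk ./scripts/rpc.py "$@"; }
    rpc nvmf_create_transport -t tcp -o -u 8192
    rpc bdev_malloc_create 64 512 -b Malloc0
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The spdk_nvme_identify trace running here is the userspace initiator walking the standard fabric bring-up against that listener: FABRIC CONNECT on the admin queue, PROPERTY GET of VS and CAP, CC.EN=0 until CSTS.RDY=0, then CC.EN=1 until CSTS.RDY=1, then IDENTIFY. A kernel initiator would reach the same discovery data with `nvme discover -t tcp -a 10.0.0.2 -s 4420` (nvme-cli assumed, not part of this run).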
00:22:01.934 [2024-12-12 10:35:35.492046] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.934 [2024-12-12 10:35:35.492051] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.934 [2024-12-12 10:35:35.492054] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.934 [2024-12-12 10:35:35.492057] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c6100) on tqpair=0x764690 00:22:01.934 [2024-12-12 10:35:35.492062] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:01.934 [2024-12-12 10:35:35.492071] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.934 [2024-12-12 10:35:35.492074] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.934 [2024-12-12 10:35:35.492077] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x764690) 00:22:01.934 [2024-12-12 10:35:35.492083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.934 [2024-12-12 10:35:35.492092] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c6100, cid 0, qid 0 00:22:01.934 [2024-12-12 10:35:35.492151] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.934 [2024-12-12 10:35:35.492157] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.934 [2024-12-12 10:35:35.492160] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.934 [2024-12-12 10:35:35.492163] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c6100) on tqpair=0x764690 00:22:01.934 [2024-12-12 10:35:35.492168] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:01.934 [2024-12-12 10:35:35.492172] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:01.934 [2024-12-12 10:35:35.492178] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:01.934 [2024-12-12 10:35:35.492286] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:22:01.934 [2024-12-12 10:35:35.492290] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:01.934 [2024-12-12 10:35:35.492298] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.934 [2024-12-12 10:35:35.492301] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.934 [2024-12-12 10:35:35.492304] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x764690) 00:22:01.934 [2024-12-12 10:35:35.492310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.934 [2024-12-12 10:35:35.492319] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c6100, cid 0, qid 0 00:22:01.934 [2024-12-12 10:35:35.492384] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.934 [2024-12-12 10:35:35.492389] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.934 [2024-12-12 10:35:35.492392] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.934 [2024-12-12 10:35:35.492395] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c6100) on tqpair=0x764690 00:22:01.934 [2024-12-12 10:35:35.492400] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:01.934 [2024-12-12 10:35:35.492410] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.934 [2024-12-12 10:35:35.492413] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.934 [2024-12-12 10:35:35.492416] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x764690) 00:22:01.934 [2024-12-12 10:35:35.492422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.934 [2024-12-12 10:35:35.492431] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c6100, cid 0, qid 0 00:22:01.934 [2024-12-12 10:35:35.492494] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.934 [2024-12-12 10:35:35.492499] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.934 [2024-12-12 10:35:35.492502] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.934 [2024-12-12 10:35:35.492505] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c6100) on tqpair=0x764690 00:22:01.934 [2024-12-12 10:35:35.492509] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:01.934 [2024-12-12 10:35:35.492514] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:01.934 [2024-12-12 10:35:35.492520] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:22:01.934 [2024-12-12 10:35:35.492531] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:01.934 [2024-12-12 10:35:35.492539] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.934 [2024-12-12 10:35:35.492542] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x764690) 00:22:01.934 [2024-12-12 10:35:35.492548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.934 [2024-12-12 10:35:35.492558] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c6100, cid 0, qid 0 00:22:01.934 [2024-12-12 10:35:35.492653] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:01.934 [2024-12-12 10:35:35.492660] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:01.934 [2024-12-12 10:35:35.492663] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:01.934 [2024-12-12 10:35:35.492666] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x764690): datao=0, datal=4096, cccid=0 00:22:01.934 [2024-12-12 10:35:35.492670] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x7c6100) on tqpair(0x764690): expected_datao=0, payload_size=4096 00:22:01.934 [2024-12-12 10:35:35.492674] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.934 [2024-12-12 10:35:35.492688] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:01.934 [2024-12-12 10:35:35.492693] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:01.934 [2024-12-12 10:35:35.533712] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.934 [2024-12-12 10:35:35.533722] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.934 [2024-12-12 10:35:35.533725] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.934 [2024-12-12 10:35:35.533729] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c6100) on tqpair=0x764690 00:22:01.934 [2024-12-12 10:35:35.533738] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:22:01.934 [2024-12-12 10:35:35.533742] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:22:01.934 [2024-12-12 10:35:35.533746] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:22:01.934 [2024-12-12 10:35:35.533754] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:22:01.934 [2024-12-12 10:35:35.533758] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:22:01.934 [2024-12-12 10:35:35.533762] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:22:01.934 [2024-12-12 10:35:35.533774] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:01.934 [2024-12-12 10:35:35.533783] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.934 [2024-12-12 10:35:35.533786] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.934 [2024-12-12 10:35:35.533789] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x764690) 00:22:01.934 [2024-12-12 10:35:35.533797] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:01.934 [2024-12-12 10:35:35.533810] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c6100, cid 0, qid 0 00:22:01.934 [2024-12-12 10:35:35.533870] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.934 [2024-12-12 10:35:35.533875] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.934 [2024-12-12 10:35:35.533878] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.934 [2024-12-12 10:35:35.533881] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c6100) on tqpair=0x764690 00:22:01.934 [2024-12-12 10:35:35.533888] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.934 [2024-12-12 10:35:35.533892] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.934 [2024-12-12 10:35:35.533894] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x764690) 00:22:01.934 [2024-12-12 
10:35:35.533899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.934 [2024-12-12 10:35:35.533905] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.934 [2024-12-12 10:35:35.533908] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.934 [2024-12-12 10:35:35.533911] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x764690) 00:22:01.934 [2024-12-12 10:35:35.533916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.934 [2024-12-12 10:35:35.533920] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.934 [2024-12-12 10:35:35.533924] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.934 [2024-12-12 10:35:35.533926] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x764690) 00:22:01.934 [2024-12-12 10:35:35.533931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.934 [2024-12-12 10:35:35.533936] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.934 [2024-12-12 10:35:35.533939] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.934 [2024-12-12 10:35:35.533942] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x764690) 00:22:01.934 [2024-12-12 10:35:35.533947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.934 [2024-12-12 10:35:35.533951] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:01.934 [2024-12-12 10:35:35.533964] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:01.934 [2024-12-12 10:35:35.533970] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.934 [2024-12-12 10:35:35.533973] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x764690) 00:22:01.934 [2024-12-12 10:35:35.533980] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.935 [2024-12-12 10:35:35.533992] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c6100, cid 0, qid 0 00:22:01.935 [2024-12-12 10:35:35.533996] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c6280, cid 1, qid 0 00:22:01.935 [2024-12-12 10:35:35.534000] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c6400, cid 2, qid 0 00:22:01.935 [2024-12-12 10:35:35.534004] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c6580, cid 3, qid 0 00:22:01.935 [2024-12-12 10:35:35.534008] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c6700, cid 4, qid 0 00:22:01.935 [2024-12-12 10:35:35.534103] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.935 [2024-12-12 10:35:35.534108] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.935 [2024-12-12 10:35:35.534111] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.935 
[2024-12-12 10:35:35.534115] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c6700) on tqpair=0x764690 00:22:01.935 [2024-12-12 10:35:35.534119] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:22:01.935 [2024-12-12 10:35:35.534124] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:22:01.935 [2024-12-12 10:35:35.534133] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.935 [2024-12-12 10:35:35.534137] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x764690) 00:22:01.935 [2024-12-12 10:35:35.534142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.935 [2024-12-12 10:35:35.534153] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c6700, cid 4, qid 0 00:22:01.935 [2024-12-12 10:35:35.534225] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:01.935 [2024-12-12 10:35:35.534231] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:01.935 [2024-12-12 10:35:35.534234] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:01.935 [2024-12-12 10:35:35.534237] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x764690): datao=0, datal=4096, cccid=4 00:22:01.935 [2024-12-12 10:35:35.534241] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7c6700) on tqpair(0x764690): expected_datao=0, payload_size=4096 00:22:01.935 [2024-12-12 10:35:35.534244] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.935 [2024-12-12 10:35:35.534259] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:01.935 [2024-12-12 10:35:35.534263] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:01.935 [2024-12-12 10:35:35.534301] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.935 [2024-12-12 10:35:35.534306] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.935 [2024-12-12 10:35:35.534309] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.935 [2024-12-12 10:35:35.534312] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c6700) on tqpair=0x764690 00:22:01.935 [2024-12-12 10:35:35.534323] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:22:01.935 [2024-12-12 10:35:35.534346] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.935 [2024-12-12 10:35:35.534350] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x764690) 00:22:01.935 [2024-12-12 10:35:35.534355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.935 [2024-12-12 10:35:35.534361] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.935 [2024-12-12 10:35:35.534364] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.935 [2024-12-12 10:35:35.534369] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x764690) 00:22:01.935 [2024-12-12 10:35:35.534374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP 
ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.935 [2024-12-12 10:35:35.534386] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c6700, cid 4, qid 0 00:22:01.935 [2024-12-12 10:35:35.534391] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c6880, cid 5, qid 0 00:22:01.935 [2024-12-12 10:35:35.534492] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:01.935 [2024-12-12 10:35:35.534497] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:01.935 [2024-12-12 10:35:35.534500] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:01.935 [2024-12-12 10:35:35.534503] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x764690): datao=0, datal=1024, cccid=4 00:22:01.935 [2024-12-12 10:35:35.534507] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7c6700) on tqpair(0x764690): expected_datao=0, payload_size=1024 00:22:01.935 [2024-12-12 10:35:35.534510] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.935 [2024-12-12 10:35:35.534516] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:01.935 [2024-12-12 10:35:35.534519] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:01.935 [2024-12-12 10:35:35.534523] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.935 [2024-12-12 10:35:35.534528] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.935 [2024-12-12 10:35:35.534531] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.935 [2024-12-12 10:35:35.534534] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c6880) on tqpair=0x764690 00:22:01.935 [2024-12-12 10:35:35.578579] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.935 [2024-12-12 10:35:35.578590] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.935 [2024-12-12 10:35:35.578593] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.935 [2024-12-12 10:35:35.578597] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c6700) on tqpair=0x764690 00:22:01.935 [2024-12-12 10:35:35.578608] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.935 [2024-12-12 10:35:35.578612] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x764690) 00:22:01.935 [2024-12-12 10:35:35.578618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.935 [2024-12-12 10:35:35.578633] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c6700, cid 4, qid 0 00:22:01.935 [2024-12-12 10:35:35.578738] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:01.935 [2024-12-12 10:35:35.578743] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:01.935 [2024-12-12 10:35:35.578746] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:01.935 [2024-12-12 10:35:35.578749] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x764690): datao=0, datal=3072, cccid=4 00:22:01.935 [2024-12-12 10:35:35.578753] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7c6700) on tqpair(0x764690): expected_datao=0, payload_size=3072 00:22:01.935 [2024-12-12 10:35:35.578756] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:22:01.935 [2024-12-12 10:35:35.578768] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:01.935 [2024-12-12 10:35:35.578772] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:01.935 [2024-12-12 10:35:35.619635] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.935 [2024-12-12 10:35:35.619645] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.935 [2024-12-12 10:35:35.619649] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.935 [2024-12-12 10:35:35.619652] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c6700) on tqpair=0x764690 00:22:01.935 [2024-12-12 10:35:35.619661] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.935 [2024-12-12 10:35:35.619664] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x764690) 00:22:01.935 [2024-12-12 10:35:35.619674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.935 [2024-12-12 10:35:35.619690] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c6700, cid 4, qid 0 00:22:01.935 [2024-12-12 10:35:35.619763] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:01.935 [2024-12-12 10:35:35.619769] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:01.935 [2024-12-12 10:35:35.619772] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:01.935 [2024-12-12 10:35:35.619775] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x764690): datao=0, datal=8, cccid=4 00:22:01.935 [2024-12-12 10:35:35.619779] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7c6700) on tqpair(0x764690): expected_datao=0, payload_size=8 00:22:01.935 [2024-12-12 10:35:35.619782] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.935 [2024-12-12 10:35:35.619788] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:01.935 [2024-12-12 10:35:35.619791] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:01.935 [2024-12-12 10:35:35.660641] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.935 [2024-12-12 10:35:35.660650] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.935 [2024-12-12 10:35:35.660653] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.935 [2024-12-12 10:35:35.660656] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c6700) on tqpair=0x764690 00:22:01.935 ===================================================== 00:22:01.935 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:01.935 ===================================================== 00:22:01.935 Controller Capabilities/Features 00:22:01.935 ================================ 00:22:01.935 Vendor ID: 0000 00:22:01.935 Subsystem Vendor ID: 0000 00:22:01.935 Serial Number: .................... 00:22:01.935 Model Number: ........................................ 
00:22:01.935 Firmware Version: 25.01 00:22:01.935 Recommended Arb Burst: 0 00:22:01.935 IEEE OUI Identifier: 00 00 00 00:22:01.935 Multi-path I/O 00:22:01.935 May have multiple subsystem ports: No 00:22:01.935 May have multiple controllers: No 00:22:01.935 Associated with SR-IOV VF: No 00:22:01.935 Max Data Transfer Size: 131072 00:22:01.935 Max Number of Namespaces: 0 00:22:01.935 Max Number of I/O Queues: 1024 00:22:01.935 NVMe Specification Version (VS): 1.3 00:22:01.935 NVMe Specification Version (Identify): 1.3 00:22:01.935 Maximum Queue Entries: 128 00:22:01.935 Contiguous Queues Required: Yes 00:22:01.935 Arbitration Mechanisms Supported 00:22:01.935 Weighted Round Robin: Not Supported 00:22:01.935 Vendor Specific: Not Supported 00:22:01.935 Reset Timeout: 15000 ms 00:22:01.935 Doorbell Stride: 4 bytes 00:22:01.935 NVM Subsystem Reset: Not Supported 00:22:01.935 Command Sets Supported 00:22:01.935 NVM Command Set: Supported 00:22:01.935 Boot Partition: Not Supported 00:22:01.935 Memory Page Size Minimum: 4096 bytes 00:22:01.935 Memory Page Size Maximum: 4096 bytes 00:22:01.935 Persistent Memory Region: Not Supported 00:22:01.935 Optional Asynchronous Events Supported 00:22:01.935 Namespace Attribute Notices: Not Supported 00:22:01.935 Firmware Activation Notices: Not Supported 00:22:01.935 ANA Change Notices: Not Supported 00:22:01.935 PLE Aggregate Log Change Notices: Not Supported 00:22:01.935 LBA Status Info Alert Notices: Not Supported 00:22:01.936 EGE Aggregate Log Change Notices: Not Supported 00:22:01.936 Normal NVM Subsystem Shutdown event: Not Supported 00:22:01.936 Zone Descriptor Change Notices: Not Supported 00:22:01.936 Discovery Log Change Notices: Supported 00:22:01.936 Controller Attributes 00:22:01.936 128-bit Host Identifier: Not Supported 00:22:01.936 Non-Operational Permissive Mode: Not Supported 00:22:01.936 NVM Sets: Not Supported 00:22:01.936 Read Recovery Levels: Not Supported 00:22:01.936 Endurance Groups: Not Supported 00:22:01.936 Predictable Latency Mode: Not Supported 00:22:01.936 Traffic Based Keep ALive: Not Supported 00:22:01.936 Namespace Granularity: Not Supported 00:22:01.936 SQ Associations: Not Supported 00:22:01.936 UUID List: Not Supported 00:22:01.936 Multi-Domain Subsystem: Not Supported 00:22:01.936 Fixed Capacity Management: Not Supported 00:22:01.936 Variable Capacity Management: Not Supported 00:22:01.936 Delete Endurance Group: Not Supported 00:22:01.936 Delete NVM Set: Not Supported 00:22:01.936 Extended LBA Formats Supported: Not Supported 00:22:01.936 Flexible Data Placement Supported: Not Supported 00:22:01.936 00:22:01.936 Controller Memory Buffer Support 00:22:01.936 ================================ 00:22:01.936 Supported: No 00:22:01.936 00:22:01.936 Persistent Memory Region Support 00:22:01.936 ================================ 00:22:01.936 Supported: No 00:22:01.936 00:22:01.936 Admin Command Set Attributes 00:22:01.936 ============================ 00:22:01.936 Security Send/Receive: Not Supported 00:22:01.936 Format NVM: Not Supported 00:22:01.936 Firmware Activate/Download: Not Supported 00:22:01.936 Namespace Management: Not Supported 00:22:01.936 Device Self-Test: Not Supported 00:22:01.936 Directives: Not Supported 00:22:01.936 NVMe-MI: Not Supported 00:22:01.936 Virtualization Management: Not Supported 00:22:01.936 Doorbell Buffer Config: Not Supported 00:22:01.936 Get LBA Status Capability: Not Supported 00:22:01.936 Command & Feature Lockdown Capability: Not Supported 00:22:01.936 Abort Command Limit: 1 00:22:01.936 Async 
Event Request Limit: 4 00:22:01.936 Number of Firmware Slots: N/A 00:22:01.936 Firmware Slot 1 Read-Only: N/A 00:22:01.936 Firmware Activation Without Reset: N/A 00:22:01.936 Multiple Update Detection Support: N/A 00:22:01.936 Firmware Update Granularity: No Information Provided 00:22:01.936 Per-Namespace SMART Log: No 00:22:01.936 Asymmetric Namespace Access Log Page: Not Supported 00:22:01.936 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:01.936 Command Effects Log Page: Not Supported 00:22:01.936 Get Log Page Extended Data: Supported 00:22:01.936 Telemetry Log Pages: Not Supported 00:22:01.936 Persistent Event Log Pages: Not Supported 00:22:01.936 Supported Log Pages Log Page: May Support 00:22:01.936 Commands Supported & Effects Log Page: Not Supported 00:22:01.936 Feature Identifiers & Effects Log Page:May Support 00:22:01.936 NVMe-MI Commands & Effects Log Page: May Support 00:22:01.936 Data Area 4 for Telemetry Log: Not Supported 00:22:01.936 Error Log Page Entries Supported: 128 00:22:01.936 Keep Alive: Not Supported 00:22:01.936 00:22:01.936 NVM Command Set Attributes 00:22:01.936 ========================== 00:22:01.936 Submission Queue Entry Size 00:22:01.936 Max: 1 00:22:01.936 Min: 1 00:22:01.936 Completion Queue Entry Size 00:22:01.936 Max: 1 00:22:01.936 Min: 1 00:22:01.936 Number of Namespaces: 0 00:22:01.936 Compare Command: Not Supported 00:22:01.936 Write Uncorrectable Command: Not Supported 00:22:01.936 Dataset Management Command: Not Supported 00:22:01.936 Write Zeroes Command: Not Supported 00:22:01.936 Set Features Save Field: Not Supported 00:22:01.936 Reservations: Not Supported 00:22:01.936 Timestamp: Not Supported 00:22:01.936 Copy: Not Supported 00:22:01.936 Volatile Write Cache: Not Present 00:22:01.936 Atomic Write Unit (Normal): 1 00:22:01.936 Atomic Write Unit (PFail): 1 00:22:01.936 Atomic Compare & Write Unit: 1 00:22:01.936 Fused Compare & Write: Supported 00:22:01.936 Scatter-Gather List 00:22:01.936 SGL Command Set: Supported 00:22:01.936 SGL Keyed: Supported 00:22:01.936 SGL Bit Bucket Descriptor: Not Supported 00:22:01.936 SGL Metadata Pointer: Not Supported 00:22:01.936 Oversized SGL: Not Supported 00:22:01.936 SGL Metadata Address: Not Supported 00:22:01.936 SGL Offset: Supported 00:22:01.936 Transport SGL Data Block: Not Supported 00:22:01.936 Replay Protected Memory Block: Not Supported 00:22:01.936 00:22:01.936 Firmware Slot Information 00:22:01.936 ========================= 00:22:01.936 Active slot: 0 00:22:01.936 00:22:01.936 00:22:01.936 Error Log 00:22:01.936 ========= 00:22:01.936 00:22:01.936 Active Namespaces 00:22:01.936 ================= 00:22:01.936 Discovery Log Page 00:22:01.936 ================== 00:22:01.936 Generation Counter: 2 00:22:01.936 Number of Records: 2 00:22:01.936 Record Format: 0 00:22:01.936 00:22:01.936 Discovery Log Entry 0 00:22:01.936 ---------------------- 00:22:01.936 Transport Type: 3 (TCP) 00:22:01.936 Address Family: 1 (IPv4) 00:22:01.936 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:01.936 Entry Flags: 00:22:01.936 Duplicate Returned Information: 1 00:22:01.936 Explicit Persistent Connection Support for Discovery: 1 00:22:01.936 Transport Requirements: 00:22:01.936 Secure Channel: Not Required 00:22:01.936 Port ID: 0 (0x0000) 00:22:01.936 Controller ID: 65535 (0xffff) 00:22:01.936 Admin Max SQ Size: 128 00:22:01.936 Transport Service Identifier: 4420 00:22:01.936 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:01.936 Transport Address: 10.0.0.2 00:22:01.936 
Discovery Log Entry 1 00:22:01.936 ---------------------- 00:22:01.936 Transport Type: 3 (TCP) 00:22:01.936 Address Family: 1 (IPv4) 00:22:01.936 Subsystem Type: 2 (NVM Subsystem) 00:22:01.936 Entry Flags: 00:22:01.936 Duplicate Returned Information: 0 00:22:01.936 Explicit Persistent Connection Support for Discovery: 0 00:22:01.936 Transport Requirements: 00:22:01.936 Secure Channel: Not Required 00:22:01.936 Port ID: 0 (0x0000) 00:22:01.936 Controller ID: 65535 (0xffff) 00:22:01.936 Admin Max SQ Size: 128 00:22:01.936 Transport Service Identifier: 4420 00:22:01.936 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:01.936 Transport Address: 10.0.0.2 [2024-12-12 10:35:35.660740] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:22:01.936 [2024-12-12 10:35:35.660751] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c6100) on tqpair=0x764690 00:22:01.936 [2024-12-12 10:35:35.660757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.936 [2024-12-12 10:35:35.660761] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c6280) on tqpair=0x764690 00:22:01.936 [2024-12-12 10:35:35.660766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.936 [2024-12-12 10:35:35.660770] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c6400) on tqpair=0x764690 00:22:01.936 [2024-12-12 10:35:35.660773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.936 [2024-12-12 10:35:35.660778] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c6580) on tqpair=0x764690 00:22:01.936 [2024-12-12 10:35:35.660781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.936 [2024-12-12 10:35:35.660789] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.936 [2024-12-12 10:35:35.660793] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.936 [2024-12-12 10:35:35.660796] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x764690) 00:22:01.936 [2024-12-12 10:35:35.660802] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.936 [2024-12-12 10:35:35.660815] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c6580, cid 3, qid 0 00:22:01.937 [2024-12-12 10:35:35.660883] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.937 [2024-12-12 10:35:35.660888] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.937 [2024-12-12 10:35:35.660891] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.937 [2024-12-12 10:35:35.660894] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c6580) on tqpair=0x764690 00:22:01.937 [2024-12-12 10:35:35.660900] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.937 [2024-12-12 10:35:35.660903] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.937 [2024-12-12 10:35:35.660908] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x764690) 00:22:01.937 [2024-12-12 10:35:35.660913] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.937 [2024-12-12 10:35:35.660926] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c6580, cid 3, qid 0 00:22:01.937 [2024-12-12 10:35:35.661000] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.937 [2024-12-12 10:35:35.661006] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.937 [2024-12-12 10:35:35.661009] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.937 [2024-12-12 10:35:35.661012] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c6580) on tqpair=0x764690 00:22:01.937 [2024-12-12 10:35:35.661016] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:22:01.937 [2024-12-12 10:35:35.661020] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:22:01.937 [2024-12-12 10:35:35.661028] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.937 [2024-12-12 10:35:35.661031] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.937 [2024-12-12 10:35:35.661034] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x764690) 00:22:01.937 [2024-12-12 10:35:35.661040] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.937 [2024-12-12 10:35:35.661048] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c6580, cid 3, qid 0 00:22:01.937 [2024-12-12 10:35:35.661114] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.937 [2024-12-12 10:35:35.661119] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.937 [2024-12-12 10:35:35.661122] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.937 [2024-12-12 10:35:35.661125] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c6580) on tqpair=0x764690 00:22:01.937 [2024-12-12 10:35:35.661134] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.937 [2024-12-12 10:35:35.661138] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.937 [2024-12-12 10:35:35.661141] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x764690) 00:22:01.937 [2024-12-12 10:35:35.661146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.937 [2024-12-12 10:35:35.661155] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c6580, cid 3, qid 0 00:22:01.937 [2024-12-12 10:35:35.661215] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.937 [2024-12-12 10:35:35.661220] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.937 [2024-12-12 10:35:35.661223] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.937 [2024-12-12 10:35:35.661226] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c6580) on tqpair=0x764690 00:22:01.937 [2024-12-12 10:35:35.661235] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.937 [2024-12-12 10:35:35.661238] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.937 [2024-12-12 10:35:35.661241] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x764690) 00:22:01.937 [2024-12-12 10:35:35.661246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.937 [2024-12-12 10:35:35.661255] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c6580, cid 3, qid 0 00:22:01.937 [2024-12-12 10:35:35.661317] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.937 [2024-12-12 10:35:35.661322] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.937 [2024-12-12 10:35:35.661326] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.937 [2024-12-12 10:35:35.661329] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c6580) on tqpair=0x764690 00:22:01.937 [2024-12-12 10:35:35.661339] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.937 [2024-12-12 10:35:35.661342] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.937 [2024-12-12 10:35:35.661346] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x764690) 00:22:01.937 [2024-12-12 10:35:35.661351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.937 [2024-12-12 10:35:35.661361] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c6580, cid 3, qid 0 00:22:01.937 [2024-12-12 10:35:35.661424] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.937 [2024-12-12 10:35:35.661429] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.937 [2024-12-12 10:35:35.661432] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.937 [2024-12-12 10:35:35.661435] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c6580) on tqpair=0x764690 00:22:01.937 [2024-12-12 10:35:35.661443] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.937 [2024-12-12 10:35:35.661446] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.937 [2024-12-12 10:35:35.661449] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x764690) 00:22:01.937 [2024-12-12 10:35:35.661454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.937 [2024-12-12 10:35:35.661463] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c6580, cid 3, qid 0 00:22:01.937 [2024-12-12 10:35:35.661521] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.937 [2024-12-12 10:35:35.661527] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.937 [2024-12-12 10:35:35.661530] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.937 [2024-12-12 10:35:35.661533] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c6580) on tqpair=0x764690 00:22:01.937 [2024-12-12 10:35:35.661540] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.937 [2024-12-12 10:35:35.661544] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.937 [2024-12-12 10:35:35.661547] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x764690) 00:22:01.937 [2024-12-12 10:35:35.661552] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.937 [2024-12-12 10:35:35.661561] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c6580, cid 3, qid 0 00:22:01.937 [2024-12-12 10:35:35.665580] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.937 [2024-12-12 10:35:35.665588] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.937 [2024-12-12 10:35:35.665591] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.937 [2024-12-12 10:35:35.665594] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c6580) on tqpair=0x764690 00:22:01.937 [2024-12-12 10:35:35.665604] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.937 [2024-12-12 10:35:35.665607] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.937 [2024-12-12 10:35:35.665610] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x764690) 00:22:01.937 [2024-12-12 10:35:35.665616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.937 [2024-12-12 10:35:35.665626] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7c6580, cid 3, qid 0 00:22:01.937 [2024-12-12 10:35:35.665753] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.937 [2024-12-12 10:35:35.665758] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.937 [2024-12-12 10:35:35.665761] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.937 [2024-12-12 10:35:35.665764] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7c6580) on tqpair=0x764690 00:22:01.937 [2024-12-12 10:35:35.665771] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 4 milliseconds 00:22:01.937 00:22:01.937 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:01.937 [2024-12-12 10:35:35.704665] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:22:01.937 [2024-12-12 10:35:35.704713] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1596557 ] 00:22:01.937 [2024-12-12 10:35:35.743795] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:22:01.937 [2024-12-12 10:35:35.743829] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:01.937 [2024-12-12 10:35:35.743834] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:01.937 [2024-12-12 10:35:35.743843] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:01.937 [2024-12-12 10:35:35.743850] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:01.937 [2024-12-12 10:35:35.747721] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:22:01.937 [2024-12-12 10:35:35.747748] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2156690 0 00:22:01.937 [2024-12-12 10:35:35.754583] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:01.937 [2024-12-12 10:35:35.754598] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:01.937 [2024-12-12 10:35:35.754602] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:01.937 [2024-12-12 10:35:35.754605] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:01.937 [2024-12-12 10:35:35.754630] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.937 [2024-12-12 10:35:35.754635] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.937 [2024-12-12 10:35:35.754638] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2156690) 00:22:01.937 [2024-12-12 10:35:35.754648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:01.937 [2024-12-12 10:35:35.754665] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8100, cid 0, qid 0 00:22:01.937 [2024-12-12 10:35:35.762580] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.937 [2024-12-12 10:35:35.762590] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.937 [2024-12-12 10:35:35.762593] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.937 [2024-12-12 10:35:35.762597] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8100) on tqpair=0x2156690 00:22:01.937 [2024-12-12 10:35:35.762608] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:01.937 [2024-12-12 10:35:35.762614] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:22:01.937 [2024-12-12 10:35:35.762619] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:22:01.938 [2024-12-12 10:35:35.762627] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.938 [2024-12-12 10:35:35.762631] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.938 [2024-12-12 10:35:35.762634] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2156690) 00:22:01.938 [2024-12-12 10:35:35.762640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.938 [2024-12-12 10:35:35.762655] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8100, cid 0, qid 0 00:22:01.938 [2024-12-12 10:35:35.762767] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.938 [2024-12-12 10:35:35.762774] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.938 [2024-12-12 10:35:35.762776] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.938 [2024-12-12 10:35:35.762780] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8100) on tqpair=0x2156690 00:22:01.938 [2024-12-12 10:35:35.762784] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:22:01.938 [2024-12-12 10:35:35.762790] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:22:01.938 [2024-12-12 10:35:35.762797] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.938 [2024-12-12 10:35:35.762800] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.938 [2024-12-12 10:35:35.762803] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2156690) 00:22:01.938 [2024-12-12 10:35:35.762808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.938 [2024-12-12 10:35:35.762818] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8100, cid 0, qid 0 00:22:01.938 [2024-12-12 10:35:35.762916] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.938 [2024-12-12 10:35:35.762922] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.938 [2024-12-12 10:35:35.762925] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.938 [2024-12-12 10:35:35.762928] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8100) on tqpair=0x2156690 00:22:01.938 [2024-12-12 10:35:35.762932] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:22:01.938 [2024-12-12 10:35:35.762939] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:01.938 [2024-12-12 10:35:35.762945] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.938 [2024-12-12 10:35:35.762948] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.938 [2024-12-12 10:35:35.762951] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2156690) 00:22:01.938 [2024-12-12 10:35:35.762957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.938 [2024-12-12 10:35:35.762966] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8100, cid 0, qid 0 00:22:01.938 [2024-12-12 10:35:35.763067] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.938 [2024-12-12 10:35:35.763073] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.938 [2024-12-12 
10:35:35.763076] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.938 [2024-12-12 10:35:35.763079] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8100) on tqpair=0x2156690 00:22:01.938 [2024-12-12 10:35:35.763084] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:01.938 [2024-12-12 10:35:35.763092] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.938 [2024-12-12 10:35:35.763096] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.938 [2024-12-12 10:35:35.763099] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2156690) 00:22:01.938 [2024-12-12 10:35:35.763104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.938 [2024-12-12 10:35:35.763113] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8100, cid 0, qid 0 00:22:01.938 [2024-12-12 10:35:35.763181] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.938 [2024-12-12 10:35:35.763186] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.938 [2024-12-12 10:35:35.763191] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.938 [2024-12-12 10:35:35.763195] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8100) on tqpair=0x2156690 00:22:01.938 [2024-12-12 10:35:35.763198] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:01.938 [2024-12-12 10:35:35.763202] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:01.938 [2024-12-12 10:35:35.763209] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:01.938 [2024-12-12 10:35:35.763316] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:22:01.938 [2024-12-12 10:35:35.763320] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:01.938 [2024-12-12 10:35:35.763327] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.938 [2024-12-12 10:35:35.763330] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.938 [2024-12-12 10:35:35.763333] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2156690) 00:22:01.938 [2024-12-12 10:35:35.763338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.938 [2024-12-12 10:35:35.763348] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8100, cid 0, qid 0 00:22:01.938 [2024-12-12 10:35:35.763413] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.938 [2024-12-12 10:35:35.763419] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.938 [2024-12-12 10:35:35.763422] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.938 [2024-12-12 10:35:35.763425] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8100) on tqpair=0x2156690 00:22:01.938 
[2024-12-12 10:35:35.763429] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:01.938 [2024-12-12 10:35:35.763437] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.938 [2024-12-12 10:35:35.763440] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.938 [2024-12-12 10:35:35.763443] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2156690) 00:22:01.938 [2024-12-12 10:35:35.763449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.938 [2024-12-12 10:35:35.763458] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8100, cid 0, qid 0 00:22:01.938 [2024-12-12 10:35:35.763565] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.938 [2024-12-12 10:35:35.763577] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.938 [2024-12-12 10:35:35.763581] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.938 [2024-12-12 10:35:35.763584] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8100) on tqpair=0x2156690 00:22:01.938 [2024-12-12 10:35:35.763588] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:01.938 [2024-12-12 10:35:35.763592] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:01.938 [2024-12-12 10:35:35.763598] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:22:01.938 [2024-12-12 10:35:35.763605] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:01.938 [2024-12-12 10:35:35.763612] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.938 [2024-12-12 10:35:35.763615] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2156690) 00:22:01.938 [2024-12-12 10:35:35.763622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.938 [2024-12-12 10:35:35.763633] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8100, cid 0, qid 0 00:22:01.938 [2024-12-12 10:35:35.763735] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:01.938 [2024-12-12 10:35:35.763741] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:01.938 [2024-12-12 10:35:35.763743] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:01.938 [2024-12-12 10:35:35.763746] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2156690): datao=0, datal=4096, cccid=0 00:22:01.938 [2024-12-12 10:35:35.763751] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21b8100) on tqpair(0x2156690): expected_datao=0, payload_size=4096 00:22:01.938 [2024-12-12 10:35:35.763755] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.938 [2024-12-12 10:35:35.763783] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:01.938 [2024-12-12 10:35:35.763786] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:01.938 [2024-12-12 10:35:35.804658] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.938 [2024-12-12 10:35:35.804668] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.938 [2024-12-12 10:35:35.804671] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.938 [2024-12-12 10:35:35.804674] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8100) on tqpair=0x2156690 00:22:01.938 [2024-12-12 10:35:35.804681] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:22:01.938 [2024-12-12 10:35:35.804685] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:22:01.938 [2024-12-12 10:35:35.804689] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:22:01.938 [2024-12-12 10:35:35.804693] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:22:01.938 [2024-12-12 10:35:35.804697] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:22:01.938 [2024-12-12 10:35:35.804701] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:22:01.938 [2024-12-12 10:35:35.804712] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:01.938 [2024-12-12 10:35:35.804720] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.938 [2024-12-12 10:35:35.804723] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.938 [2024-12-12 10:35:35.804726] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2156690) 00:22:01.938 [2024-12-12 10:35:35.804732] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:01.938 [2024-12-12 10:35:35.804744] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8100, cid 0, qid 0 00:22:01.938 [2024-12-12 10:35:35.804807] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.938 [2024-12-12 10:35:35.804813] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.938 [2024-12-12 10:35:35.804816] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.938 [2024-12-12 10:35:35.804819] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8100) on tqpair=0x2156690 00:22:01.938 [2024-12-12 10:35:35.804824] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.939 [2024-12-12 10:35:35.804828] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.939 [2024-12-12 10:35:35.804831] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2156690) 00:22:01.939 [2024-12-12 10:35:35.804836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.939 [2024-12-12 10:35:35.804846] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.939 [2024-12-12 10:35:35.804849] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.939 [2024-12-12 
10:35:35.804852] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2156690)
00:22:01.939 [2024-12-12 10:35:35.804857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:01.939 [2024-12-12 10:35:35.804862] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.939 [2024-12-12 10:35:35.804865] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.939 [2024-12-12 10:35:35.804868] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2156690)
00:22:01.939 [2024-12-12 10:35:35.804873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:01.939 [2024-12-12 10:35:35.804878] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.939 [2024-12-12 10:35:35.804881] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.939 [2024-12-12 10:35:35.804884] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2156690)
00:22:01.939 [2024-12-12 10:35:35.804888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:01.939 [2024-12-12 10:35:35.804892] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms)
00:22:01.939 [2024-12-12 10:35:35.804902] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:22:01.939 [2024-12-12 10:35:35.804907] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.939 [2024-12-12 10:35:35.804910] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2156690)
00:22:01.939 [2024-12-12 10:35:35.804916] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.939 [2024-12-12 10:35:35.804927] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8100, cid 0, qid 0
00:22:01.939 [2024-12-12 10:35:35.804931] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8280, cid 1, qid 0
00:22:01.939 [2024-12-12 10:35:35.804935] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8400, cid 2, qid 0
00:22:01.939 [2024-12-12 10:35:35.804939] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8580, cid 3, qid 0
00:22:01.939 [2024-12-12 10:35:35.804943] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8700, cid 4, qid 0
00:22:01.939 [2024-12-12 10:35:35.805041] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.939 [2024-12-12 10:35:35.805047] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.939 [2024-12-12 10:35:35.805050] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.939 [2024-12-12 10:35:35.805053] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8700) on tqpair=0x2156690
00:22:01.939 [2024-12-12 10:35:35.805057] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us
00:22:01.939 [2024-12-12 10:35:35.805062] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms)
00:22:01.939 [2024-12-12 10:35:35.805072] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms)
00:22:01.939 [2024-12-12 10:35:35.805079] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms)
00:22:01.939 [2024-12-12 10:35:35.805084] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.939 [2024-12-12 10:35:35.805088] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.939 [2024-12-12 10:35:35.805092] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2156690)
00:22:01.939 [2024-12-12 10:35:35.805097] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:22:01.939 [2024-12-12 10:35:35.805107] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8700, cid 4, qid 0
00:22:01.939 [2024-12-12 10:35:35.805210] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.939 [2024-12-12 10:35:35.805216] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.939 [2024-12-12 10:35:35.805219] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.939 [2024-12-12 10:35:35.805222] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8700) on tqpair=0x2156690
00:22:01.939 [2024-12-12 10:35:35.805270] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms)
00:22:01.939 [2024-12-12 10:35:35.805280] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms)
00:22:01.939 [2024-12-12 10:35:35.805286] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.939 [2024-12-12 10:35:35.805289] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2156690)
00:22:01.939 [2024-12-12 10:35:35.805295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.939 [2024-12-12 10:35:35.805304] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8700, cid 4, qid 0
00:22:01.939 [2024-12-12 10:35:35.805379] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:22:01.939 [2024-12-12 10:35:35.805384] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:01.939 [2024-12-12 10:35:35.805387] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:01.939 [2024-12-12 10:35:35.805390] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2156690): datao=0, datal=4096, cccid=4
00:22:01.939 [2024-12-12 10:35:35.805394] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21b8700) on tqpair(0x2156690): expected_datao=0, payload_size=4096
00:22:01.939 [2024-12-12 10:35:35.805398] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.939 [2024-12-12 10:35:35.805403] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:01.939 [2024-12-12 10:35:35.805407] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:01.939 [2024-12-12 10:35:35.805461] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.939 [2024-12-12 10:35:35.805466] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.939 [2024-12-12 10:35:35.805469] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.939 [2024-12-12 10:35:35.805472] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8700) on tqpair=0x2156690
00:22:01.939 [2024-12-12 10:35:35.805482] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added
00:22:01.939 [2024-12-12 10:35:35.805494] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms)
00:22:01.939 [2024-12-12 10:35:35.805502] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms)
00:22:01.939 [2024-12-12 10:35:35.805508] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.939 [2024-12-12 10:35:35.805511] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2156690)
00:22:01.939 [2024-12-12 10:35:35.805516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.939 [2024-12-12 10:35:35.805526] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8700, cid 4, qid 0
00:22:01.939 [2024-12-12 10:35:35.805653] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:22:01.939 [2024-12-12 10:35:35.805661] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:01.939 [2024-12-12 10:35:35.805664] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:01.939 [2024-12-12 10:35:35.805667] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2156690): datao=0, datal=4096, cccid=4
00:22:01.939 [2024-12-12 10:35:35.805671] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21b8700) on tqpair(0x2156690): expected_datao=0, payload_size=4096
00:22:01.939 [2024-12-12 10:35:35.805674] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.939 [2024-12-12 10:35:35.805680] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:01.939 [2024-12-12 10:35:35.805683] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:01.939 [2024-12-12 10:35:35.805715] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.939 [2024-12-12 10:35:35.805720] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.939 [2024-12-12 10:35:35.805723] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.939 [2024-12-12 10:35:35.805726] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8700) on tqpair=0x2156690
00:22:01.939 [2024-12-12 10:35:35.805737] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms)
00:22:01.939 [2024-12-12 10:35:35.805745] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:22:01.939 [2024-12-12 10:35:35.805751] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.939 [2024-12-12 10:35:35.805755] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2156690)
00:22:01.939 [2024-12-12 10:35:35.805760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.939 [2024-12-12 10:35:35.805770] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8700, cid 4, qid 0
00:22:01.939 [2024-12-12 10:35:35.805848] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:22:01.939 [2024-12-12 10:35:35.805854] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:01.939 [2024-12-12 10:35:35.805857] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:01.939 [2024-12-12 10:35:35.805860] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2156690): datao=0, datal=4096, cccid=4
00:22:01.939 [2024-12-12 10:35:35.805864] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21b8700) on tqpair(0x2156690): expected_datao=0, payload_size=4096
00:22:01.939 [2024-12-12 10:35:35.805868] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.939 [2024-12-12 10:35:35.805873] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:01.939 [2024-12-12 10:35:35.805876] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:01.939 [2024-12-12 10:35:35.805914] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.939 [2024-12-12 10:35:35.805920] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.939 [2024-12-12 10:35:35.805923] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.939 [2024-12-12 10:35:35.805926] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8700) on tqpair=0x2156690
00:22:01.939 [2024-12-12 10:35:35.805932] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms)
00:22:01.939 [2024-12-12 10:35:35.805939] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms)
00:22:01.939 [2024-12-12 10:35:35.805947] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms)
00:22:01.939 [2024-12-12 10:35:35.805952] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms)
00:22:01.940 [2024-12-12 10:35:35.805956] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms)
00:22:01.940 [2024-12-12 10:35:35.805963] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms)
00:22:01.940 [2024-12-12 10:35:35.805967] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID
00:22:01.940 [2024-12-12 10:35:35.805971] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms)
00:22:01.940 [2024-12-12 10:35:35.805976] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout)
00:22:01.940 [2024-12-12 10:35:35.805988] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.940 [2024-12-12 10:35:35.805991] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2156690)
00:22:01.940 [2024-12-12 10:35:35.805997] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.940 [2024-12-12 10:35:35.806002] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.940 [2024-12-12 10:35:35.806005] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.940 [2024-12-12 10:35:35.806008] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2156690)
00:22:01.940 [2024-12-12 10:35:35.806013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:22:01.940 [2024-12-12 10:35:35.806025] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8700, cid 4, qid 0
00:22:01.940 [2024-12-12 10:35:35.806030] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8880, cid 5, qid 0
00:22:01.940 [2024-12-12 10:35:35.806146] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.940 [2024-12-12 10:35:35.806152] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.940 [2024-12-12 10:35:35.806155] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.940 [2024-12-12 10:35:35.806158] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8700) on tqpair=0x2156690
00:22:01.940 [2024-12-12 10:35:35.806164] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.940 [2024-12-12 10:35:35.806168] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.940 [2024-12-12 10:35:35.806171] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.940 [2024-12-12 10:35:35.806174] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8880) on tqpair=0x2156690
00:22:01.940 [2024-12-12 10:35:35.806182] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.940 [2024-12-12 10:35:35.806186] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2156690)
00:22:01.940 [2024-12-12 10:35:35.806191] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.940 [2024-12-12 10:35:35.806200] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8880, cid 5, qid 0
00:22:01.940 [2024-12-12 10:35:35.806296] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.940 [2024-12-12 10:35:35.806302] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.940 [2024-12-12 10:35:35.806304] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.940 [2024-12-12 10:35:35.806308] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8880) on tqpair=0x2156690
00:22:01.940 [2024-12-12 10:35:35.806315] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.940 [2024-12-12 10:35:35.806318] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2156690)
00:22:01.940 [2024-12-12 10:35:35.806324] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.940 [2024-12-12 10:35:35.806333] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8880, cid 5, qid 0
00:22:01.940 [2024-12-12 10:35:35.806391] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.940 [2024-12-12 10:35:35.806396] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.940 [2024-12-12 10:35:35.806399] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.940 [2024-12-12 10:35:35.806403] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8880) on tqpair=0x2156690
00:22:01.940 [2024-12-12 10:35:35.806411] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.940 [2024-12-12 10:35:35.806414] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2156690)
00:22:01.940 [2024-12-12 10:35:35.806420] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.940 [2024-12-12 10:35:35.806429] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8880, cid 5, qid 0
00:22:01.940 [2024-12-12 10:35:35.806498] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.940 [2024-12-12 10:35:35.806503] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.940 [2024-12-12 10:35:35.806506] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.940 [2024-12-12 10:35:35.806509] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8880) on tqpair=0x2156690
00:22:01.940 [2024-12-12 10:35:35.806521] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.940 [2024-12-12 10:35:35.806524] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2156690)
00:22:01.940 [2024-12-12 10:35:35.806530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.940 [2024-12-12 10:35:35.806536] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.940 [2024-12-12 10:35:35.806539] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2156690)
00:22:01.940 [2024-12-12 10:35:35.806544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.940 [2024-12-12 10:35:35.806550] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.940 [2024-12-12 10:35:35.806553] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x2156690)
00:22:01.940 [2024-12-12 10:35:35.806558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.940 [2024-12-12 10:35:35.806565] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.940 [2024-12-12 10:35:35.806568] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2156690)
00:22:01.940 [2024-12-12 10:35:35.810580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.940 [2024-12-12 10:35:35.810593] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8880, cid 5, qid 0
00:22:01.940 [2024-12-12 10:35:35.810597] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8700, cid 4, qid 0
00:22:01.940 [2024-12-12 10:35:35.810601] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8a00, cid 6, qid 0
00:22:01.940 [2024-12-12 10:35:35.810605] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8b80, cid 7, qid 0
00:22:01.940 [2024-12-12 10:35:35.810750] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:22:01.940 [2024-12-12 10:35:35.810756] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:01.940 [2024-12-12 10:35:35.810758] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:01.940 [2024-12-12 10:35:35.810761] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2156690): datao=0, datal=8192, cccid=5
00:22:01.940 [2024-12-12 10:35:35.810766] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21b8880) on tqpair(0x2156690): expected_datao=0, payload_size=8192
00:22:01.940 [2024-12-12 10:35:35.810771] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.940 [2024-12-12 10:35:35.810820] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:01.940 [2024-12-12 10:35:35.810823] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:01.940 [2024-12-12 10:35:35.810828] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:22:01.940 [2024-12-12 10:35:35.810833] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:01.940 [2024-12-12 10:35:35.810836] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:01.940 [2024-12-12 10:35:35.810839] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2156690): datao=0, datal=512, cccid=4
00:22:01.940 [2024-12-12 10:35:35.810842] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21b8700) on tqpair(0x2156690): expected_datao=0, payload_size=512
00:22:01.940 [2024-12-12 10:35:35.810846] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.940 [2024-12-12 10:35:35.810851] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:01.940 [2024-12-12 10:35:35.810854] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:01.940 [2024-12-12 10:35:35.810859] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:22:01.940 [2024-12-12 10:35:35.810863] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:01.940 [2024-12-12 10:35:35.810866] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:01.940 [2024-12-12 10:35:35.810869] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2156690): datao=0, datal=512, cccid=6
00:22:01.940 [2024-12-12 10:35:35.810873] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21b8a00) on tqpair(0x2156690): expected_datao=0, payload_size=512
00:22:01.940 [2024-12-12 10:35:35.810877] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.940 [2024-12-12 10:35:35.810882] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:01.940 [2024-12-12 10:35:35.810885] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:01.940 [2024-12-12 10:35:35.810889] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:22:01.940 [2024-12-12 10:35:35.810894] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:01.940 [2024-12-12 10:35:35.810897] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:01.940 [2024-12-12 10:35:35.810900] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2156690): datao=0, datal=4096, cccid=7
00:22:01.940 [2024-12-12 10:35:35.810903] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21b8b80) on tqpair(0x2156690): expected_datao=0, payload_size=4096
00:22:01.941 [2024-12-12 10:35:35.810907] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.941 [2024-12-12 10:35:35.810917] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:01.941 [2024-12-12 10:35:35.810921] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:01.941 [2024-12-12 10:35:35.810927] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.941 [2024-12-12 10:35:35.810932] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.941 [2024-12-12 10:35:35.810934] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.941 [2024-12-12 10:35:35.810938] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8880) on tqpair=0x2156690
00:22:01.941 [2024-12-12 10:35:35.810949] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.941 [2024-12-12 10:35:35.810954] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.941 [2024-12-12 10:35:35.810957] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.941 [2024-12-12 10:35:35.810961] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8700) on tqpair=0x2156690
00:22:01.941 [2024-12-12 10:35:35.810968] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.941 [2024-12-12 10:35:35.810973] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.941 [2024-12-12 10:35:35.810976] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.941 [2024-12-12 10:35:35.810979] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8a00) on tqpair=0x2156690
00:22:01.941 [2024-12-12 10:35:35.810986] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.941 [2024-12-12 10:35:35.810991] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.941 [2024-12-12 10:35:35.810994] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.941 [2024-12-12 10:35:35.810997] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8b80) on tqpair=0x2156690
=====================================================
00:22:01.941 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:01.941 =====================================================
00:22:01.941 Controller Capabilities/Features
00:22:01.941 ================================
00:22:01.941 Vendor ID: 8086
00:22:01.941 Subsystem Vendor ID: 8086
00:22:01.941 Serial Number: SPDK00000000000001
00:22:01.941 Model Number: SPDK bdev Controller
00:22:01.941 Firmware Version: 25.01
00:22:01.941 Recommended Arb Burst: 6
00:22:01.941 IEEE OUI Identifier: e4 d2 5c
00:22:01.941 Multi-path I/O
00:22:01.941 May have multiple subsystem ports: Yes
00:22:01.941 May have multiple controllers: Yes
00:22:01.941 Associated with SR-IOV VF: No
00:22:01.941 Max Data Transfer Size: 131072
00:22:01.941 Max Number of Namespaces: 32
00:22:01.941 Max Number of I/O Queues: 127
00:22:01.941 NVMe Specification Version (VS): 1.3
00:22:01.941 NVMe Specification Version (Identify): 1.3
00:22:01.941 Maximum Queue Entries: 128
00:22:01.941 Contiguous Queues Required: Yes
00:22:01.941 Arbitration Mechanisms Supported
00:22:01.941 Weighted Round Robin: Not Supported
00:22:01.941 Vendor Specific: Not Supported
00:22:01.941 Reset Timeout: 15000 ms
00:22:01.941 Doorbell Stride: 4 bytes
00:22:01.941 NVM Subsystem Reset: Not Supported
00:22:01.941 Command Sets Supported
00:22:01.941 NVM Command Set: Supported
00:22:01.941 Boot Partition: Not Supported
00:22:01.941 Memory Page Size Minimum: 4096 bytes
00:22:01.941 Memory Page Size Maximum: 4096 bytes
00:22:01.941 Persistent Memory Region: Not Supported
00:22:01.941 Optional Asynchronous Events Supported
00:22:01.941 Namespace Attribute Notices: Supported
00:22:01.941 Firmware Activation Notices: Not Supported
00:22:01.941 ANA Change Notices: Not Supported
00:22:01.941 PLE Aggregate Log Change Notices: Not Supported
00:22:01.941 LBA Status Info Alert Notices: Not Supported
00:22:01.941 EGE Aggregate Log Change Notices: Not Supported
00:22:01.941 Normal NVM Subsystem Shutdown event: Not Supported
00:22:01.941 Zone Descriptor Change Notices: Not Supported
00:22:01.941 Discovery Log Change Notices: Not Supported
00:22:01.941 Controller Attributes
00:22:01.941 128-bit Host Identifier: Supported
00:22:01.941 Non-Operational Permissive Mode: Not Supported
00:22:01.941 NVM Sets: Not Supported
00:22:01.941 Read Recovery Levels: Not Supported
00:22:01.941 Endurance Groups: Not Supported
00:22:01.941 Predictable Latency Mode: Not Supported
00:22:01.941 Traffic Based Keep Alive: Not Supported
00:22:01.941 Namespace Granularity: Not Supported
00:22:01.941 SQ Associations: Not Supported
00:22:01.941 UUID List: Not Supported
00:22:01.941 Multi-Domain Subsystem: Not Supported
00:22:01.941 Fixed Capacity Management: Not Supported
00:22:01.941 Variable Capacity Management: Not Supported
00:22:01.941 Delete Endurance Group: Not Supported
00:22:01.941 Delete NVM Set: Not Supported
00:22:01.941 Extended LBA Formats Supported: Not Supported
00:22:01.941 Flexible Data Placement Supported: Not Supported
00:22:01.941
00:22:01.941 Controller Memory Buffer Support
00:22:01.941 ================================
00:22:01.941 Supported: No
00:22:01.941
00:22:01.941 Persistent Memory Region Support
00:22:01.941 ================================
00:22:01.941 Supported: No
00:22:01.941
00:22:01.941 Admin Command Set Attributes
00:22:01.941 ============================
00:22:01.941 Security Send/Receive: Not Supported
00:22:01.941 Format NVM: Not Supported
00:22:01.941 Firmware Activate/Download: Not Supported
00:22:01.941 Namespace Management: Not Supported
00:22:01.941 Device Self-Test: Not Supported
00:22:01.941 Directives: Not Supported
00:22:01.941 NVMe-MI: Not Supported
00:22:01.941 Virtualization Management: Not Supported
00:22:01.941 Doorbell Buffer Config: Not Supported
00:22:01.941 Get LBA Status Capability: Not Supported
00:22:01.941 Command & Feature Lockdown Capability: Not Supported
00:22:01.941 Abort Command Limit: 4
00:22:01.941 Async Event Request Limit: 4
00:22:01.941 Number of Firmware Slots: N/A
00:22:01.941 Firmware Slot 1 Read-Only: N/A
00:22:01.941 Firmware Activation Without Reset: N/A
00:22:01.941 Multiple Update Detection Support: N/A
00:22:01.941 Firmware Update Granularity: No Information Provided
00:22:01.941 Per-Namespace SMART Log: No
00:22:01.941 Asymmetric Namespace Access Log Page: Not Supported
00:22:01.941 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:22:01.941 Command Effects Log Page: Supported
00:22:01.941 Get Log Page Extended Data: Supported
00:22:01.941 Telemetry Log Pages: Not Supported
00:22:01.941 Persistent Event Log Pages: Not Supported
00:22:01.941 Supported Log Pages Log Page: May Support
00:22:01.941 Commands Supported & Effects Log Page: Not Supported
00:22:01.941 Feature Identifiers & Effects Log Page: May Support
00:22:01.941 NVMe-MI Commands & Effects Log Page: May Support
00:22:01.941 Data Area 4 for Telemetry Log: Not Supported
00:22:01.941 Error Log Page Entries Supported: 128
00:22:01.941 Keep Alive: Supported
00:22:01.941 Keep Alive Granularity: 10000 ms
00:22:01.941
00:22:01.941 NVM Command Set Attributes
00:22:01.941 ==========================
00:22:01.941 Submission Queue Entry Size
00:22:01.941 Max: 64
00:22:01.941 Min: 64
00:22:01.941 Completion Queue Entry Size
00:22:01.941 Max: 16
00:22:01.941 Min: 16
00:22:01.941 Number of Namespaces: 32
00:22:01.941 Compare Command: Supported
00:22:01.941 Write Uncorrectable Command: Not Supported
00:22:01.941 Dataset Management Command: Supported
00:22:01.941 Write Zeroes Command: Supported
00:22:01.941 Set Features Save Field: Not Supported
00:22:01.941 Reservations: Supported
00:22:01.941 Timestamp: Not Supported
00:22:01.941 Copy: Supported
00:22:01.941 Volatile Write Cache: Present
00:22:01.941 Atomic Write Unit (Normal): 1
00:22:01.941 Atomic Write Unit (PFail): 1
00:22:01.941 Atomic Compare & Write Unit: 1
00:22:01.941 Fused Compare & Write: Supported
00:22:01.941 Scatter-Gather List
00:22:01.941 SGL Command Set: Supported
00:22:01.941 SGL Keyed: Supported
00:22:01.941 SGL Bit Bucket Descriptor: Not Supported
00:22:01.941 SGL Metadata Pointer: Not Supported
00:22:01.941 Oversized SGL: Not Supported
00:22:01.941 SGL Metadata Address: Not Supported
00:22:01.941 SGL Offset: Supported
00:22:01.941 Transport SGL Data Block: Not Supported
00:22:01.941 Replay Protected Memory Block: Not Supported
00:22:01.941
00:22:01.941 Firmware Slot Information
00:22:01.941 =========================
00:22:01.941 Active slot: 1
00:22:01.941 Slot 1 Firmware Revision: 25.01
00:22:01.941
00:22:01.941
00:22:01.941 Commands Supported and Effects
00:22:01.941 ==============================
00:22:01.941 Admin Commands
00:22:01.941 --------------
00:22:01.941 Get Log Page (02h): Supported
00:22:01.941 Identify (06h): Supported
00:22:01.941 Abort (08h): Supported
00:22:01.941 Set Features (09h): Supported
00:22:01.941 Get Features (0Ah): Supported
00:22:01.941 Asynchronous Event Request (0Ch): Supported
00:22:01.941 Keep Alive (18h): Supported
00:22:01.941 I/O Commands
00:22:01.941 ------------
00:22:01.941 Flush (00h): Supported LBA-Change
00:22:01.941 Write (01h): Supported LBA-Change
00:22:01.941 Read (02h): Supported
00:22:01.941 Compare (05h): Supported
00:22:01.941 Write Zeroes (08h): Supported LBA-Change
00:22:01.941 Dataset Management (09h): Supported LBA-Change
00:22:01.941 Copy (19h): Supported LBA-Change
00:22:01.941
00:22:01.941 Error Log
00:22:01.941 =========
00:22:01.941
00:22:01.941 Arbitration
00:22:01.941 ===========
00:22:01.941 Arbitration Burst: 1
00:22:01.941
00:22:01.941 Power Management
00:22:01.941 ================
00:22:01.942 Number of Power States: 1
00:22:01.942 Current Power State: Power State #0
00:22:01.942 Power State #0:
00:22:01.942 Max Power: 0.00 W
00:22:01.942 Non-Operational State: Operational
00:22:01.942 Entry Latency: Not Reported
00:22:01.942 Exit Latency: Not Reported
00:22:01.942 Relative Read Throughput: 0
00:22:01.942 Relative Read Latency: 0
00:22:01.942 Relative Write Throughput: 0
00:22:01.942 Relative Write Latency: 0
00:22:01.942 Idle Power: Not Reported
00:22:01.942 Active Power: Not Reported
00:22:01.942 Non-Operational Permissive Mode: Not Supported
00:22:01.942
00:22:01.942 Health Information
00:22:01.942 ==================
00:22:01.942 Critical Warnings:
00:22:01.942 Available Spare Space: OK
00:22:01.942 Temperature: OK
00:22:01.942 Device Reliability: OK
00:22:01.942 Read Only: No
00:22:01.942 Volatile Memory Backup: OK
00:22:01.942 Current Temperature: 0 Kelvin (-273 Celsius)
00:22:01.942 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:22:01.942 Available Spare: 0%
00:22:01.942 Available Spare Threshold: 0%
00:22:01.942 Life Percentage Used:[2024-12-12 10:35:35.811074] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.942 [2024-12-12 10:35:35.811078] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2156690)
00:22:01.942 [2024-12-12 10:35:35.811084] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.942 [2024-12-12 10:35:35.811095] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8b80, cid 7, qid 0
00:22:01.942 [2024-12-12 10:35:35.811178] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.942 [2024-12-12 10:35:35.811183] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.942 [2024-12-12 10:35:35.811187] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.942 [2024-12-12 10:35:35.811190] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8b80) on tqpair=0x2156690
00:22:01.942 [2024-12-12 10:35:35.811217] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD
00:22:01.942 [2024-12-12 10:35:35.811225] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8100) on tqpair=0x2156690
00:22:01.942 [2024-12-12 10:35:35.811230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.942 [2024-12-12 10:35:35.811234] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8280) on tqpair=0x2156690
00:22:01.942 [2024-12-12 10:35:35.811238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.942 [2024-12-12 10:35:35.811242] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8400) on tqpair=0x2156690
00:22:01.942 [2024-12-12 10:35:35.811246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.942 [2024-12-12 10:35:35.811250] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8580) on tqpair=0x2156690
00:22:01.942 [2024-12-12 10:35:35.811254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.942 [2024-12-12 10:35:35.811260] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.942 [2024-12-12 10:35:35.811263] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.942 [2024-12-12 10:35:35.811266] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2156690)
00:22:01.942 [2024-12-12 10:35:35.811272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.942 [2024-12-12 10:35:35.811283] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8580, cid 3, qid 0
00:22:01.942 [2024-12-12 10:35:35.811357] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.942 [2024-12-12 10:35:35.811363] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.942 [2024-12-12 10:35:35.811366] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.942 [2024-12-12 10:35:35.811369] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8580) on tqpair=0x2156690
00:22:01.942 [2024-12-12 10:35:35.811374] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.942 [2024-12-12 10:35:35.811378] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.942 [2024-12-12 10:35:35.811381] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2156690)
00:22:01.942 [2024-12-12 10:35:35.811386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.942 [2024-12-12 10:35:35.811399] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8580, cid 3, qid 0
00:22:01.942 [2024-12-12 10:35:35.811478] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.942 [2024-12-12 10:35:35.811484] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.942 [2024-12-12 10:35:35.811487] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.942 [2024-12-12 10:35:35.811490] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8580) on tqpair=0x2156690
00:22:01.942 [2024-12-12 10:35:35.811494] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us
00:22:01.942 [2024-12-12 10:35:35.811498] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms
00:22:01.942 [2024-12-12 10:35:35.811506] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.942 [2024-12-12 10:35:35.811509] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.942 [2024-12-12 10:35:35.811512] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2156690)
00:22:01.942 [2024-12-12 10:35:35.811518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.942 [2024-12-12 10:35:35.811527] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8580, cid 3, qid 0
00:22:01.942 [2024-12-12 10:35:35.811596] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.942 [2024-12-12 10:35:35.811602] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.942 [2024-12-12 10:35:35.811605] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.942 [2024-12-12 10:35:35.811608] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8580) on tqpair=0x2156690
00:22:01.942 [2024-12-12 10:35:35.811616] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.942 [2024-12-12 10:35:35.811619] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.942 [2024-12-12 10:35:35.811622] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2156690)
00:22:01.942 [2024-12-12 10:35:35.811628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.942 [2024-12-12 10:35:35.811637] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8580, cid 3, qid 0
00:22:01.942 [2024-12-12 10:35:35.811713] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.942 [2024-12-12 10:35:35.811718] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.942 [2024-12-12 10:35:35.811721] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.942 [2024-12-12 10:35:35.811724] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8580) on tqpair=0x2156690
00:22:01.942 [2024-12-12 10:35:35.811733] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.942 [2024-12-12 10:35:35.811736] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.942 [2024-12-12 10:35:35.811739] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2156690)
00:22:01.942 [2024-12-12 10:35:35.811744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.942 [2024-12-12 10:35:35.811753] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8580, cid 3, qid 0
00:22:01.942 [2024-12-12 10:35:35.811812] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.942 [2024-12-12 10:35:35.811818] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.942 [2024-12-12 10:35:35.811821] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.942 [2024-12-12 10:35:35.811824] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8580) on tqpair=0x2156690
00:22:01.942 [2024-12-12 10:35:35.811832] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.942 [2024-12-12 10:35:35.811835] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.942 [2024-12-12 10:35:35.811838] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2156690)
00:22:01.942 [2024-12-12 10:35:35.811845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.942 [2024-12-12 10:35:35.811854] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8580, cid 3, qid 0
00:22:01.942 [2024-12-12 10:35:35.811930] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.942 [2024-12-12 10:35:35.811935] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.942 [2024-12-12 10:35:35.811938] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.942 [2024-12-12 10:35:35.811941] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8580) on tqpair=0x2156690
00:22:01.942 [2024-12-12 10:35:35.811949] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.942 [2024-12-12 10:35:35.811953] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.942 [2024-12-12 10:35:35.811956] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2156690)
00:22:01.942 [2024-12-12 10:35:35.811961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.942 [2024-12-12 10:35:35.811970] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8580, cid 3, qid 0
00:22:01.942 [2024-12-12 10:35:35.812028] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.942 [2024-12-12 10:35:35.812033] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.942 [2024-12-12 10:35:35.812036] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.942 [2024-12-12 10:35:35.812039] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8580) on tqpair=0x2156690
00:22:01.942 [2024-12-12 10:35:35.812047] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.942 [2024-12-12 10:35:35.812050] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.942 [2024-12-12 10:35:35.812053] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2156690)
00:22:01.942 [2024-12-12 10:35:35.812058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.942 [2024-12-12 10:35:35.812067] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8580, cid 3, qid 0
00:22:01.942 [2024-12-12 10:35:35.812124] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.942 [2024-12-12 10:35:35.812129] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.942 [2024-12-12 10:35:35.812132] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.942 [2024-12-12 10:35:35.812135] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8580) on tqpair=0x2156690
00:22:01.942 [2024-12-12 10:35:35.812143] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.942 [2024-12-12 10:35:35.812147] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.942 [2024-12-12 10:35:35.812150] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2156690)
00:22:01.943 [2024-12-12 10:35:35.812155] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.943 [2024-12-12 10:35:35.812164] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8580, cid 3, qid 0
00:22:01.943 [2024-12-12 10:35:35.812224] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.943 [2024-12-12 10:35:35.812229] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.943 [2024-12-12 10:35:35.812232] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.943 [2024-12-12 10:35:35.812235] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8580) on tqpair=0x2156690
00:22:01.943 [2024-12-12 10:35:35.812243] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.943 [2024-12-12 10:35:35.812247] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.943 [2024-12-12 10:35:35.812250] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2156690)
00:22:01.943 [2024-12-12 10:35:35.812255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.943 [2024-12-12 10:35:35.812267] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8580, cid 3, qid 0
00:22:01.943 [2024-12-12 10:35:35.812342] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.943 [2024-12-12 10:35:35.812347] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.943 [2024-12-12 10:35:35.812350] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.943 [2024-12-12 10:35:35.812353] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8580) on tqpair=0x2156690
00:22:01.943 [2024-12-12 10:35:35.812361] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.943 [2024-12-12 10:35:35.812365] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.943 [2024-12-12 10:35:35.812367] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2156690)
00:22:01.943 [2024-12-12 10:35:35.812373] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.943 [2024-12-12 10:35:35.812382] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8580, cid 3, qid 0
00:22:01.943 [2024-12-12 10:35:35.812440] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.943 [2024-12-12 10:35:35.812445] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.943 [2024-12-12 10:35:35.812448] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.943 [2024-12-12 10:35:35.812451] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8580) on tqpair=0x2156690
00:22:01.943 [2024-12-12 10:35:35.812459] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.943 [2024-12-12 10:35:35.812462] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.943 [2024-12-12 10:35:35.812465] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2156690)
00:22:01.943 [2024-12-12 10:35:35.812471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.943 [2024-12-12 10:35:35.812480] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8580, cid 3, qid 0
00:22:01.943 [2024-12-12 10:35:35.812537] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.943 [2024-12-12 10:35:35.812542] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.943 [2024-12-12 10:35:35.812545] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.943 [2024-12-12 10:35:35.812548] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8580) on tqpair=0x2156690
00:22:01.943 [2024-12-12 10:35:35.812556] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.943 [2024-12-12 10:35:35.812560] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.943 [2024-12-12 10:35:35.812563] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2156690)
00:22:01.943 [2024-12-12 10:35:35.812574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.943 [2024-12-12 10:35:35.812584] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8580, cid 3, qid 0
00:22:01.943 [2024-12-12 10:35:35.812654] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.943 [2024-12-12 10:35:35.812660] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.943 [2024-12-12 10:35:35.812663] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.943 [2024-12-12 10:35:35.812666] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8580) on tqpair=0x2156690
00:22:01.943 [2024-12-12 10:35:35.812673] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.943 [2024-12-12 10:35:35.812677] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.943 [2024-12-12 10:35:35.812680] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2156690)
00:22:01.943 [2024-12-12 10:35:35.812685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.943 [2024-12-12 10:35:35.812695] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8580, cid 3, qid 0
00:22:01.943 [2024-12-12 10:35:35.812771] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.943 [2024-12-12 10:35:35.812777] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.943 [2024-12-12 10:35:35.812780] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.943 [2024-12-12 10:35:35.812783] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8580) on tqpair=0x2156690
00:22:01.943 [2024-12-12 10:35:35.812791] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.943 [2024-12-12 10:35:35.812794] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.943 [2024-12-12 10:35:35.812798] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2156690)
00:22:01.943 [2024-12-12 10:35:35.812803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.943 [2024-12-12 10:35:35.812812] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8580, cid 3, qid 0
00:22:01.943 [2024-12-12 10:35:35.812871] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.943 [2024-12-12 10:35:35.812877] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.943 [2024-12-12 10:35:35.812880] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.943 [2024-12-12 10:35:35.812883] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8580) on tqpair=0x2156690
00:22:01.943 [2024-12-12 10:35:35.812891] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.943 [2024-12-12 10:35:35.812895] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.943 [2024-12-12 10:35:35.812898] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2156690)
00:22:01.943 [2024-12-12 10:35:35.812903] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.943 [2024-12-12 10:35:35.812912] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8580, cid 3, qid 0
00:22:01.943 [2024-12-12 10:35:35.812969] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.943 [2024-12-12 10:35:35.812975] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.943 [2024-12-12 10:35:35.812978] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.943 [2024-12-12 10:35:35.812981] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8580) on tqpair=0x2156690
00:22:01.943 [2024-12-12 10:35:35.812989] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.943 [2024-12-12 10:35:35.812992] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.943 [2024-12-12 10:35:35.812995] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2156690)
00:22:01.943 [2024-12-12 10:35:35.813001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.943 [2024-12-12 10:35:35.813010] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8580, cid 3, qid 0
00:22:01.943 [2024-12-12 10:35:35.813085] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.943 [2024-12-12 10:35:35.813090] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.943 [2024-12-12 10:35:35.813093] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.943 [2024-12-12 10:35:35.813096] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8580) on tqpair=0x2156690
00:22:01.943 [2024-12-12 10:35:35.813104] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.943 [2024-12-12 10:35:35.813107] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.943 [2024-12-12 10:35:35.813110] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2156690)
00:22:01.943 [2024-12-12 10:35:35.813116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.943 [2024-12-12 10:35:35.813125] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8580, cid 3, qid 0
00:22:01.943 [2024-12-12 10:35:35.813184] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.943 [2024-12-12 10:35:35.813191] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.943 [2024-12-12 10:35:35.813195] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.943 [2024-12-12 10:35:35.813198] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8580) on tqpair=0x2156690
00:22:01.943 [2024-12-12 10:35:35.813206] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.943 [2024-12-12 10:35:35.813209] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.943 [2024-12-12 10:35:35.813212] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2156690)
00:22:01.943 [2024-12-12 10:35:35.813217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.943 [2024-12-12 10:35:35.813227] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8580, cid 3, qid 0
00:22:01.943 [2024-12-12 10:35:35.813291] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.943 [2024-12-12 10:35:35.813296] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.943 [2024-12-12 10:35:35.813299] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.943 [2024-12-12 10:35:35.813302] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8580) on tqpair=0x2156690
00:22:01.943 [2024-12-12 10:35:35.813311] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.943 [2024-12-12 10:35:35.813314] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.943 [2024-12-12 10:35:35.813317] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2156690)
00:22:01.943 [2024-12-12 10:35:35.813323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.943 [2024-12-12 10:35:35.813332] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8580, cid 3, qid 0
00:22:01.943 [2024-12-12 10:35:35.813393] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.943 [2024-12-12 10:35:35.813399] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.943 [2024-12-12 10:35:35.813402] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.943 [2024-12-12 10:35:35.813405] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8580) on tqpair=0x2156690
00:22:01.943 [2024-12-12 10:35:35.813413] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.943 [2024-12-12 10:35:35.813417] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.943 [2024-12-12 10:35:35.813419] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2156690)
00:22:01.943 [2024-12-12 10:35:35.813425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.943 [2024-12-12 10:35:35.813434] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8580, cid 3, qid 0
00:22:01.944 [2024-12-12 10:35:35.813514] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.944 [2024-12-12 10:35:35.813519] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.944 [2024-12-12 10:35:35.813522] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.944 [2024-12-12 10:35:35.813525] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8580) on tqpair=0x2156690
00:22:01.944 [2024-12-12 10:35:35.813533] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.944 [2024-12-12 10:35:35.813537] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.944 [2024-12-12 10:35:35.813540] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2156690)
00:22:01.944 [2024-12-12 10:35:35.813545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.944 [2024-12-12 10:35:35.813554] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8580, cid 3, qid 0
00:22:01.944 [2024-12-12 10:35:35.813629] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.944 [2024-12-12 10:35:35.813635] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.944 [2024-12-12 10:35:35.813639] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.944 [2024-12-12 10:35:35.813642] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8580) on tqpair=0x2156690
00:22:01.944 [2024-12-12 10:35:35.813650] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.944 [2024-12-12 10:35:35.813654] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.944 [2024-12-12 10:35:35.813657] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2156690)
00:22:01.944 [2024-12-12 10:35:35.813662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.944 [2024-12-12 10:35:35.813671] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8580, cid 3, qid 0
00:22:01.944 [2024-12-12 10:35:35.813732] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.944 [2024-12-12 10:35:35.813737] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.944 [2024-12-12 10:35:35.813740] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.944 [2024-12-12 10:35:35.813743] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8580) on tqpair=0x2156690
00:22:01.944 [2024-12-12 10:35:35.813752] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.944 [2024-12-12 10:35:35.813755] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.944 [2024-12-12 10:35:35.813758] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2156690)
00:22:01.944 [2024-12-12 10:35:35.813763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.944 [2024-12-12 10:35:35.813772] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8580, cid 3, qid 0
00:22:01.944 [2024-12-12 10:35:35.813834] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.944 [2024-12-12 10:35:35.813839] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.944 [2024-12-12 10:35:35.813842] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.944 [2024-12-12 10:35:35.813845] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8580) on tqpair=0x2156690
00:22:01.944 [2024-12-12 10:35:35.813853] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.944 [2024-12-12 10:35:35.813857] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.944 [2024-12-12 10:35:35.813860] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2156690)
00:22:01.944 [2024-12-12 10:35:35.813865] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.944 [2024-12-12 10:35:35.813874] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8580, cid 3, qid 0
00:22:01.944 [2024-12-12 10:35:35.813929] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.944 [2024-12-12 10:35:35.813935] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.944 [2024-12-12 10:35:35.813938] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.944 [2024-12-12 10:35:35.813941] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8580) on tqpair=0x2156690
00:22:01.944 [2024-12-12 10:35:35.813949] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.944 [2024-12-12 10:35:35.813952] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.944 [2024-12-12 10:35:35.813955] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2156690)
00:22:01.944 [2024-12-12 10:35:35.813961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.944 [2024-12-12 10:35:35.813970] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8580, cid 3, qid 0
00:22:01.944 [2024-12-12 10:35:35.814040] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.944 [2024-12-12 10:35:35.814045] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.944 [2024-12-12 10:35:35.814048] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.944 [2024-12-12 10:35:35.814053] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8580) on tqpair=0x2156690
00:22:01.944 [2024-12-12 10:35:35.814061] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.944 [2024-12-12 10:35:35.814064] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.944 [2024-12-12 10:35:35.814067] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2156690)
00:22:01.944 [2024-12-12 10:35:35.814073] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.944 [2024-12-12 10:35:35.814082] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8580, cid 3, qid 0
00:22:01.944 [2024-12-12 10:35:35.814140] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.944 [2024-12-12 10:35:35.814145] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.944 [2024-12-12 10:35:35.814148] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.944 [2024-12-12 10:35:35.814151] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8580) on tqpair=0x2156690
00:22:01.944 [2024-12-12 10:35:35.814159] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.944 [2024-12-12 10:35:35.814163] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.944 [2024-12-12 10:35:35.814166] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2156690)
00:22:01.944 [2024-12-12 10:35:35.814171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.944 [2024-12-12 10:35:35.814180] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8580, cid 3, qid 0
00:22:01.944 [2024-12-12 10:35:35.814248] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.944 [2024-12-12 10:35:35.814254] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.944 [2024-12-12 10:35:35.814256] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.944 [2024-12-12 10:35:35.814259] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8580) on tqpair=0x2156690
00:22:01.944 [2024-12-12 10:35:35.814267] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.944 [2024-12-12 10:35:35.814271] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.944 [2024-12-12 10:35:35.814274] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2156690)
00:22:01.944 [2024-12-12 10:35:35.814279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.944 [2024-12-12 10:35:35.814288] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8580, cid 3, qid 0
00:22:01.944 [2024-12-12 10:35:35.814363] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.944 [2024-12-12 10:35:35.814368] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.944 [2024-12-12 10:35:35.814371] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.944 [2024-12-12 10:35:35.814374] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8580) on tqpair=0x2156690
00:22:01.944 [2024-12-12 10:35:35.814383] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.944 [2024-12-12 10:35:35.814386] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.944 [2024-12-12 10:35:35.814389] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2156690)
00:22:01.944 [2024-12-12 10:35:35.814395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.944 [2024-12-12 10:35:35.814404] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8580, cid 3, qid 0
00:22:01.944 [2024-12-12 10:35:35.814470] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.944 [2024-12-12 10:35:35.814475] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.944 [2024-12-12 10:35:35.814478] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.944 [2024-12-12 10:35:35.814482] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8580) on tqpair=0x2156690
00:22:01.944 [2024-12-12 10:35:35.814492] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.944 [2024-12-12 10:35:35.814495] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.944 [2024-12-12 10:35:35.814498] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2156690)
00:22:01.944 [2024-12-12 10:35:35.814504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.944 [2024-12-12 10:35:35.814514] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8580, cid 3, qid 0
00:22:01.944 [2024-12-12 10:35:35.818579] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.944 [2024-12-12 10:35:35.818587] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.944 [2024-12-12 10:35:35.818590] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.944 [2024-12-12 10:35:35.818593] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21b8580) on tqpair=0x2156690
00:22:01.944 [2024-12-12 10:35:35.818602] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.944 [2024-12-12 10:35:35.818606] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.944 [2024-12-12 10:35:35.818609] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2156690)
00:22:01.944 [2024-12-12 10:35:35.818614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.944 [2024-12-12 10:35:35.818625] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21b8580, cid 3, qid 0
00:22:01.944 [2024-12-12 10:35:35.818695] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.944 [2024-12-12 10:35:35.818701] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.944 [2024-12-12 10:35:35.818704] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.944 [2024-12-12 10:35:35.818707] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete
tcp_req(0x21b8580) on tqpair=0x2156690
00:22:01.944 [2024-12-12 10:35:35.818713] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds
00:22:01.944 0%
00:22:01.944 Data Units Read: 0
00:22:01.944 Data Units Written: 0
00:22:01.944 Host Read Commands: 0
00:22:01.944 Host Write Commands: 0
00:22:01.944 Controller Busy Time: 0 minutes
00:22:01.944 Power Cycles: 0
00:22:01.944 Power On Hours: 0 hours
00:22:01.944 Unsafe Shutdowns: 0
00:22:01.944 Unrecoverable Media Errors: 0
00:22:01.944 Lifetime Error Log Entries: 0
00:22:01.944 Warning Temperature Time: 0 minutes
00:22:01.944 Critical Temperature Time: 0 minutes
00:22:01.944
00:22:01.944 Number of Queues
00:22:01.944 ================
00:22:01.944 Number of I/O Submission Queues: 127
00:22:01.945 Number of I/O Completion Queues: 127
00:22:01.945
00:22:01.945 Active Namespaces
00:22:01.945 =================
00:22:01.945 Namespace ID:1
00:22:01.945 Error Recovery Timeout: Unlimited
00:22:01.945 Command Set Identifier: NVM (00h)
00:22:01.945 Deallocate: Supported
00:22:01.945 Deallocated/Unwritten Error: Not Supported
00:22:01.945 Deallocated Read Value: Unknown
00:22:01.945 Deallocate in Write Zeroes: Not Supported
00:22:01.945 Deallocated Guard Field: 0xFFFF
00:22:01.945 Flush: Supported
00:22:01.945 Reservation: Supported
00:22:01.945 Namespace Sharing Capabilities: Multiple Controllers
00:22:01.945 Size (in LBAs): 131072 (0GiB)
00:22:01.945 Capacity (in LBAs): 131072 (0GiB)
00:22:01.945 Utilization (in LBAs): 131072 (0GiB)
00:22:01.945 NGUID: ABCDEF0123456789ABCDEF0123456789
00:22:01.945 EUI64: ABCDEF0123456789
00:22:01.945 UUID: d8f99c3f-44a1-421e-bb31-ceeb6201a611
00:22:01.945 Thin Provisioning: Not Supported
00:22:01.945 Per-NS Atomic Units: Yes
00:22:01.945 Atomic Boundary Size (Normal): 0
00:22:01.945 Atomic Boundary Size (PFail): 0
00:22:01.945 Atomic Boundary Offset: 0
00:22:01.945 Maximum Single Source Range Length: 65535
00:22:01.945 Maximum Copy Length: 65535
00:22:01.945 Maximum Source Range Count: 1
00:22:01.945 NGUID/EUI64 Never Reused: No
00:22:01.945 Namespace Write Protected: No
00:22:01.945 Number of LBA Formats: 1
00:22:01.945 Current LBA Format: LBA Format #00
00:22:01.945 LBA Format #00: Data Size: 512 Metadata Size: 0
00:22:01.945
00:22:01.945 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync
00:22:01.945 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:01.945 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:01.945 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:01.945 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:01.945 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:22:01.945 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:22:01.945 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:01.945 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync
00:22:01.945 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:01.945 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e
00:22:01.945 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:01.945 10:35:35
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:01.945 rmmod nvme_tcp 00:22:01.945 rmmod nvme_fabrics 00:22:01.945 rmmod nvme_keyring 00:22:01.945 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:01.945 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:22:01.945 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:22:01.945 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 1596464 ']' 00:22:01.945 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 1596464 00:22:01.945 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 1596464 ']' 00:22:01.945 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 1596464 00:22:01.945 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:22:01.945 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:01.945 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1596464 00:22:02.204 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:02.204 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:02.204 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1596464' 00:22:02.204 killing process with pid 1596464 00:22:02.204 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 1596464 00:22:02.204 10:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 1596464 00:22:02.204 10:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:02.204 10:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:02.204 10:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:02.204 10:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:22:02.204 10:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:22:02.204 10:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:02.204 10:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:22:02.204 10:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:02.204 10:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:02.204 10:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.204 10:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:02.204 10:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:04.740 10:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:04.740 00:22:04.740 real 0m9.282s 00:22:04.740 user 0m5.715s 00:22:04.740 sys 0m4.763s 00:22:04.740 10:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:04.740 10:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:04.740 
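For reference: the pages of nvme_tcp *DEBUG* lines above only exist because this run uses a debug build with an SPDK log component enabled; release builds compile SPDK_DEBUGLOG out entirely. A sketch of reproducing that verbosity (the component name nvme is an assumption about which flag covers nvme_tcp.c):

    # debug build required for -L; -e 0xFFFF is the tracepoint mask this job uses
    ./configure --enable-debug && make
    build/bin/nvmf_tgt -e 0xFFFF -L nvme

The identify dump itself is easier to read than it looks: 'Size (in LBAs): 131072 (0GiB)' is 131072 x 512 B = 64 MiB, presumably the 64 MiB malloc bdev backing the namespace, truncated to whole GiB by the printer. While a target like this is still listening, roughly the same controller and namespace fields can be read back with stock nvme-cli; a sketch (device node names are assumptions, they depend on enumeration order):

    nvme discover -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme id-ctrl /dev/nvme0       # controller data
    nvme id-ns /dev/nvme0n1       # namespace data: NGUID, EUI64, LBA formats
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1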
************************************ 00:22:04.740 END TEST nvmf_identify 00:22:04.740 ************************************ 00:22:04.740 10:35:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:04.740 10:35:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:04.740 10:35:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:04.740 10:35:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.740 ************************************ 00:22:04.740 START TEST nvmf_perf 00:22:04.740 ************************************ 00:22:04.740 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:04.740 * Looking for test storage... 00:22:04.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:04.740 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:04.740 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:22:04.740 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:04.740 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:04.740 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:04.740 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:04.740 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:04.740 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:22:04.740 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:22:04.740 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:22:04.740 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:22:04.740 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:22:04.740 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:22:04.740 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:22:04.740 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:04.740 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:22:04.740 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:22:04.740 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:04.740 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:04.740 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:22:04.740 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:22:04.740 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:04.740 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:22:04.740 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:04.740 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:22:04.740 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:22:04.740 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:04.740 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:22:04.740 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:04.740 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:04.740 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:04.740 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:22:04.740 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:04.740 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:04.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.740 --rc genhtml_branch_coverage=1 00:22:04.740 --rc genhtml_function_coverage=1 00:22:04.740 --rc genhtml_legend=1 00:22:04.740 --rc geninfo_all_blocks=1 00:22:04.740 --rc geninfo_unexecuted_blocks=1 00:22:04.740 00:22:04.740 ' 00:22:04.740 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:04.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.740 --rc genhtml_branch_coverage=1 00:22:04.740 --rc genhtml_function_coverage=1 00:22:04.740 --rc genhtml_legend=1 00:22:04.740 --rc geninfo_all_blocks=1 00:22:04.740 --rc geninfo_unexecuted_blocks=1 00:22:04.740 00:22:04.740 ' 00:22:04.740 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:04.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.740 --rc genhtml_branch_coverage=1 00:22:04.740 --rc genhtml_function_coverage=1 00:22:04.740 --rc genhtml_legend=1 00:22:04.740 --rc geninfo_all_blocks=1 00:22:04.740 --rc geninfo_unexecuted_blocks=1 00:22:04.740 00:22:04.740 ' 00:22:04.740 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:04.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.740 --rc genhtml_branch_coverage=1 00:22:04.740 --rc genhtml_function_coverage=1 00:22:04.740 --rc genhtml_legend=1 00:22:04.740 --rc geninfo_all_blocks=1 00:22:04.740 --rc geninfo_unexecuted_blocks=1 00:22:04.740 00:22:04.740 ' 00:22:04.740 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- 
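Aside: the cmp_versions trace above (used to decide whether the installed lcov understands the newer coverage flags) is a field-wise numeric version compare in pure bash, splitting on dots and comparing component by component. A compact sketch of the same idea, not the script's exact code:

    ver_lt() {  # ver_lt A B: succeed if version A < version B
        local IFS=. i
        local -a a=($1) b=($2)
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            ((${a[i]:-0} < ${b[i]:-0})) && return 0
            ((${a[i]:-0} > ${b[i]:-0})) && return 1
        done
        return 1  # equal is not less-than
    }
    ver_lt 1.15 2 && echo "lcov too old"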
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:04.741 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:04.741 10:35:38 
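One real wart in the trace above: '[' '' -eq 1 ']' makes the test builtin print '[: : integer expression expected', because -eq needs a numeric operand and the tested variable expanded to the empty string. The usual guard, as a sketch (SPDK_TEST_FOO is a hypothetical stand-in, not a variable from this run):

    # defaulting the expansion keeps the -eq operand numeric even when unset or empty
    if [ "${SPDK_TEST_FOO:-0}" -eq 1 ]; then
        echo "feature enabled"
    fi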
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:04.741 10:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:10.094 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:10.094 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:10.094 Found net devices under 0000:af:00.0: cvl_0_0 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:10.094 10:35:44 
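The device scan above is plain sysfs walking: for each candidate PCI function the script expands /sys/bus/pci/devices/$pci/net/* to find the bound netdev. The same lookup by hand, for the first E810 port this run found:

    ls /sys/bus/pci/devices/0000:af:00.0/net      # -> cvl_0_0
    cat /sys/bus/pci/devices/0000:af:00.0/vendor  # -> 0x8086 (Intel)
    cat /sys/bus/pci/devices/0000:af:00.0/device  # -> 0x159b (E810)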
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:10.094 Found net devices under 0000:af:00.1: cvl_0_1 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:10.094 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:10.353 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:10.353 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:10.353 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:10.353 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:10.353 10:35:44 
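nvmf_tcp_init, traced above and below, builds the physical two-port topology: one E810 port moves into a private network namespace to host the target at 10.0.0.2, while its sibling stays in the root namespace as the initiator at 10.0.0.1, so NVMe/TCP traffic actually leaves one port and arrives on the other (the ports are presumably cabled back to back or through a switch). Condensed from the trace, including the comment-tagged firewall rule that lets teardown later delete exactly what it added with iptables-save | grep -v SPDK_NVMF | iptables-restore:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2    # root (initiator) side reaches the target namespace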
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:10.353 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:10.353 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:10.353 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:10.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:10.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:22:10.353 00:22:10.353 --- 10.0.0.2 ping statistics --- 00:22:10.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:10.353 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:22:10.353 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:10.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:10.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:22:10.353 00:22:10.353 --- 10.0.0.1 ping statistics --- 00:22:10.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:10.353 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:22:10.353 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:10.353 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:22:10.353 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:10.353 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:10.353 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:10.353 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:10.353 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:10.353 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:10.353 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:10.611 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:10.611 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:10.611 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:10.611 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:10.611 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=1600142 00:22:10.611 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 1600142 00:22:10.611 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:10.611 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 1600142 ']' 00:22:10.611 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:10.611 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:10.611 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:22:10.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:10.611 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:10.611 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:10.611 [2024-12-12 10:35:44.432010] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:22:10.611 [2024-12-12 10:35:44.432054] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:10.611 [2024-12-12 10:35:44.506602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:10.611 [2024-12-12 10:35:44.546968] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:10.612 [2024-12-12 10:35:44.547005] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:10.612 [2024-12-12 10:35:44.547013] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:10.612 [2024-12-12 10:35:44.547019] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:10.612 [2024-12-12 10:35:44.547025] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:10.612 [2024-12-12 10:35:44.548537] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:10.612 [2024-12-12 10:35:44.548645] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:10.612 [2024-12-12 10:35:44.548679] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:10.612 [2024-12-12 10:35:44.548680] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:22:10.869 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:10.869 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:22:10.869 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:10.869 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:10.869 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:10.869 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:10.869 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:10.869 10:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:14.147 10:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:14.147 10:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:14.147 10:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:22:14.147 10:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:14.147 10:35:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
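With nvmf_tgt up inside the namespace, the subsystem is assembled over the RPC socket; the trace that follows runs essentially this sequence (names and addresses exactly as logged):

    scripts/rpc.py bdev_malloc_create 64 512     # -> Malloc0
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420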
00:22:14.147 10:35:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:22:14.147 10:35:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:14.147 10:35:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:14.147 10:35:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:14.404 [2024-12-12 10:35:48.334690] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:14.404 10:35:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:14.662 10:35:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:14.662 10:35:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:14.921 10:35:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:14.921 10:35:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:15.178 10:35:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:15.178 [2024-12-12 10:35:49.141680] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:15.178 10:35:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:15.435 10:35:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:22:15.435 10:35:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:15.435 10:35:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:15.435 10:35:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:16.811 Initializing NVMe Controllers 00:22:16.811 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:22:16.811 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:22:16.811 Initialization complete. Launching workers. 
00:22:16.811 ========================================================
00:22:16.811 Latency(us)
00:22:16.811 Device Information : IOPS MiB/s Average min max
00:22:16.811 PCIE (0000:5e:00.0) NSID 1 from core 0: 97361.80 380.32 328.09 34.91 4561.74
00:22:16.811 ========================================================
00:22:16.811 Total : 97361.80 380.32 328.09 34.91 4561.74
00:22:16.811
00:22:16.811 10:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:22:18.183 Initializing NVMe Controllers
00:22:18.183 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:18.183 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:18.183 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:22:18.183 Initialization complete. Launching workers.
00:22:18.183 ========================================================
00:22:18.183 Latency(us)
00:22:18.183 Device Information : IOPS MiB/s Average min max
00:22:18.183 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 191.32 0.75 5275.69 106.43 45802.86
00:22:18.183 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.80 0.22 17732.03 7143.84 47886.92
00:22:18.183 ========================================================
00:22:18.183 Total : 248.12 0.97 8127.14 106.43 47886.92
00:22:18.184
00:22:18.184 10:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:22:19.556 Initializing NVMe Controllers
00:22:19.556 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:19.556 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:19.556 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:22:19.556 Initialization complete. Launching workers.
00:22:19.556 ========================================================
00:22:19.556 Latency(us)
00:22:19.556 Device Information : IOPS MiB/s Average min max
00:22:19.556 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11247.06 43.93 2854.16 400.59 6313.27
00:22:19.556 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3764.68 14.71 8535.72 5256.25 15962.83
00:22:19.556 ========================================================
00:22:19.556 Total : 15011.74 58.64 4278.99 400.59 15962.83
00:22:19.556
00:22:19.556 10:35:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:22:19.556 10:35:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:22:19.556 10:35:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:22:22.085 Initializing NVMe Controllers
00:22:22.085 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:22.085 Controller IO queue size 128, less than required.
00:22:22.085 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:22.085 Controller IO queue size 128, less than required.
00:22:22.085 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:22.085 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:22.085 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:22:22.085 Initialization complete. Launching workers.
00:22:22.085 ========================================================
00:22:22.085 Latency(us)
00:22:22.085 Device Information : IOPS MiB/s Average min max
00:22:22.085 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1828.42 457.10 70773.54 45140.28 104509.63
00:22:22.085 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 606.47 151.62 222331.66 81203.60 343459.44
00:22:22.085 ========================================================
00:22:22.085 Total : 2434.89 608.72 108523.03 45140.28 343459.44
00:22:22.085
00:22:22.085 10:35:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:22:22.343 No valid NVMe controllers or AIO or URING devices found
00:22:22.343 Initializing NVMe Controllers
00:22:22.343 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:22.343 Controller IO queue size 128, less than required.
00:22:22.343 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:22.343 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:22:22.343 Controller IO queue size 128, less than required.
00:22:22.343 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:22.343 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:22:22.343 WARNING: Some requested NVMe devices were skipped
00:22:22.343 10:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:22:24.872 Initializing NVMe Controllers
00:22:24.872 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:24.872 Controller IO queue size 128, less than required.
00:22:24.872 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:24.872 Controller IO queue size 128, less than required.
00:22:24.872 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:24.872 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:24.872 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:22:24.872 Initialization complete. Launching workers.
00:22:24.872
00:22:24.872 ====================
00:22:24.872 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:22:24.872 TCP transport:
00:22:24.872 polls: 11459
00:22:24.872 idle_polls: 7445
00:22:24.872 sock_completions: 4014
00:22:24.872 nvme_completions: 6819
00:22:24.872 submitted_requests: 10152
00:22:24.872 queued_requests: 1
00:22:24.872
00:22:24.872 ====================
00:22:24.872 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:22:24.872 TCP transport:
00:22:24.872 polls: 15588
00:22:24.872 idle_polls: 12079
00:22:24.872 sock_completions: 3509
00:22:24.872 nvme_completions: 6285
00:22:24.872 submitted_requests: 9460
00:22:24.872 queued_requests: 1
00:22:24.872 ========================================================
00:22:24.872 Latency(us)
00:22:24.872 Device Information : IOPS MiB/s Average min max
00:22:24.872 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1701.25 425.31 77119.12 53530.54 127553.13
00:22:24.872 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1568.00 392.00 82320.78 40507.64 127295.59
00:22:24.872 ========================================================
00:22:24.872 Total : 3269.25 817.31 79613.94 40507.64 127553.13
00:22:24.872
00:22:24.872 10:35:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:22:24.872 10:35:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:25.131 10:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:22:25.131 10:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:22:25.131 10:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:22:25.131 10:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:25.131 10:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync
00:22:25.131 10:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:25.131 10:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e
00:22:25.131 10:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:25.131 10:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:25.131 rmmod nvme_tcp
00:22:25.131 rmmod nvme_fabrics
00:22:25.131 rmmod nvme_keyring
00:22:25.131 10:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:25.131 10:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e
00:22:25.131 10:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0
00:22:25.131 10:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 1600142 ']'
00:22:25.131 10:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 1600142
00:22:25.131 10:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 1600142 ']'
00:22:25.131 10:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 1600142
00:22:25.131 10:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname
00:22:25.131 10:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:25.131 10:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1600142
00:22:25.403 10:35:59
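Each benchmark block above is a single spdk_nvme_perf invocation; the sweep varies queue depth (-q), IO size (-o), read/write mix (-M 50 = 50/50 randrw) and runtime (-t), and --transport-stat additionally dumps the per-queue poll/completion counters shown above. The run that printed 'No valid NVMe controllers or AIO or URING devices found' failed on simple arithmetic: 36964 bytes is not a multiple of the 512-byte sector size (72 x 512 = 36864, remainder 100), so both namespaces were dropped and nothing was left to test. A representative invocation from the trace:

    build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat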
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:25.403 10:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:25.403 10:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1600142' 00:22:25.403 killing process with pid 1600142 00:22:25.403 10:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 1600142 00:22:25.403 10:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 1600142 00:22:26.778 10:36:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:26.778 10:36:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:26.778 10:36:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:26.778 10:36:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:22:26.778 10:36:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:22:26.778 10:36:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:26.778 10:36:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:22:26.778 10:36:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:26.778 10:36:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:26.778 10:36:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.778 10:36:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:26.778 10:36:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:29.313 10:36:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:29.313 00:22:29.313 real 0m24.424s 00:22:29.313 user 1m4.010s 00:22:29.313 sys 0m8.158s 00:22:29.313 10:36:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:29.313 10:36:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:29.313 ************************************ 00:22:29.313 END TEST nvmf_perf 00:22:29.313 ************************************ 00:22:29.313 10:36:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:29.313 10:36:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:29.313 10:36:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:29.313 10:36:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.313 ************************************ 00:22:29.313 START TEST nvmf_fio_host 00:22:29.313 ************************************ 00:22:29.313 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:29.313 * Looking for test storage... 
00:22:29.313 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:29.313 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:29.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:29.314 --rc genhtml_branch_coverage=1 00:22:29.314 --rc genhtml_function_coverage=1 00:22:29.314 --rc genhtml_legend=1 00:22:29.314 --rc geninfo_all_blocks=1 00:22:29.314 --rc geninfo_unexecuted_blocks=1 00:22:29.314 00:22:29.314 ' 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:29.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:29.314 --rc genhtml_branch_coverage=1 00:22:29.314 --rc genhtml_function_coverage=1 00:22:29.314 --rc genhtml_legend=1 00:22:29.314 --rc geninfo_all_blocks=1 00:22:29.314 --rc geninfo_unexecuted_blocks=1 00:22:29.314 00:22:29.314 ' 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:29.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:29.314 --rc genhtml_branch_coverage=1 00:22:29.314 --rc genhtml_function_coverage=1 00:22:29.314 --rc genhtml_legend=1 00:22:29.314 --rc geninfo_all_blocks=1 00:22:29.314 --rc geninfo_unexecuted_blocks=1 00:22:29.314 00:22:29.314 ' 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:29.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:29.314 --rc genhtml_branch_coverage=1 00:22:29.314 --rc genhtml_function_coverage=1 00:22:29.314 --rc genhtml_legend=1 00:22:29.314 --rc geninfo_all_blocks=1 00:22:29.314 --rc geninfo_unexecuted_blocks=1 00:22:29.314 00:22:29.314 ' 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:29.314 10:36:02 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:29.314 10:36:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:29.314 10:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:29.314 10:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.314 10:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.315 10:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.315 10:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:29.315 10:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.315 10:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:22:29.315 10:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:29.315 10:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:29.315 10:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:29.315 10:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:29.315 10:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:29.315 10:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:29.315 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:29.315 10:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:29.315 10:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:29.315 10:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:29.315 10:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:29.315 
10:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:29.315 10:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:29.315 10:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:29.315 10:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:29.315 10:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:29.315 10:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:29.315 10:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:29.315 10:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:29.315 10:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:29.315 10:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:29.315 10:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:29.315 10:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:22:29.315 10:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.589 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:34.589 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:22:34.589 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:34.589 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:34.589 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:34.589 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:34.589 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:34.589 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:22:34.589 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:34.589 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:22:34.589 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:22:34.589 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:34.590 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:34.590 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:34.590 Found net devices under 0000:af:00.0: cvl_0_0 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:34.590 Found net devices under 0000:af:00.1: cvl_0_1 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:34.590 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:34.849 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:34.849 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:34.849 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:34.849 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:34.849 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:35.108 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:35.108 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:35.108 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:35.108 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:35.108 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:35.108 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:22:35.108 00:22:35.108 --- 10.0.0.2 ping statistics --- 00:22:35.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.108 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:22:35.108 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:35.108 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:35.108 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:22:35.108 00:22:35.108 --- 10.0.0.1 ping statistics --- 00:22:35.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.109 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:22:35.109 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:35.109 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:22:35.109 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:35.109 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:35.109 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:35.109 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:35.109 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:35.109 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:35.109 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:35.109 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:35.109 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:35.109 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:35.109 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.109 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1606337 00:22:35.109 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:35.109 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:35.109 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1606337 00:22:35.109 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 1606337 ']' 00:22:35.109 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.109 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:35.109 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:35.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:35.109 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:35.109 10:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.109 [2024-12-12 10:36:09.035044] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
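The ping exchange above verifies the point-to-point wiring that nvmf_tcp_init builds for phy runs: one port of the NIC pair (cvl_0_0) is moved into a private network namespace and addressed 10.0.0.2 as the target side, while its link partner (cvl_0_1) stays in the root namespace as the 10.0.0.1 initiator, so test traffic really crosses the physical links. A condensed sketch of that wiring, using the interface, namespace, and iptables names taken from this log (run as root; assumes the same cvl_* device naming):

# Sketch of the namespace setup traced by nvmf/common.sh above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port; the comment tag is what the iptr teardown helper
# later filters on (iptables-save | grep -v SPDK_NVMF | iptables-restore).
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator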
00:22:35.109 [2024-12-12 10:36:09.035087] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:35.109 [2024-12-12 10:36:09.112268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:35.367 [2024-12-12 10:36:09.153818] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:35.367 [2024-12-12 10:36:09.153865] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:35.367 [2024-12-12 10:36:09.153872] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:35.367 [2024-12-12 10:36:09.153879] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:35.367 [2024-12-12 10:36:09.153885] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:35.367 [2024-12-12 10:36:09.155235] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:35.367 [2024-12-12 10:36:09.155343] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:35.367 [2024-12-12 10:36:09.155429] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.367 [2024-12-12 10:36:09.155430] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:22:35.367 10:36:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:35.367 10:36:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:22:35.367 10:36:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:35.625 [2024-12-12 10:36:09.445658] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:35.625 10:36:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:35.625 10:36:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:35.625 10:36:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.625 10:36:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:35.883 Malloc1 00:22:35.883 10:36:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:36.141 10:36:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:36.397 10:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:36.397 [2024-12-12 10:36:10.369043] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:36.397 10:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:36.655 10:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:36.655 10:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:36.655 10:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:36.655 10:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:36.655 10:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:36.655 10:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:36.655 10:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:36.655 10:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:36.655 10:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:36.655 10:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:36.655 10:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:36.655 10:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:36.655 10:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:36.655 10:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:36.655 10:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:36.655 10:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:36.655 10:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:36.655 10:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:36.655 10:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:36.655 10:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:36.655 10:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:36.655 10:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:36.655 10:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:36.912 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:36.912 fio-3.35 00:22:36.912 Starting 1 thread 00:22:39.440 00:22:39.440 test: (groupid=0, jobs=1): 
err= 0: pid=1607244: Thu Dec 12 10:36:13 2024
00:22:39.440 read: IOPS=12.0k, BW=46.9MiB/s (49.2MB/s)(94.1MiB/2005msec)
00:22:39.440 slat (nsec): min=1496, max=250212, avg=1657.65, stdev=2206.66
00:22:39.440 clat (usec): min=3107, max=9767, avg=5881.85, stdev=444.69
00:22:39.440 lat (usec): min=3142, max=9769, avg=5883.51, stdev=444.55
00:22:39.440 clat percentiles (usec):
00:22:39.440 | 1.00th=[ 4817], 5.00th=[ 5145], 10.00th=[ 5342], 20.00th=[ 5538],
00:22:39.440 | 30.00th=[ 5669], 40.00th=[ 5800], 50.00th=[ 5866], 60.00th=[ 5997],
00:22:39.440 | 70.00th=[ 6128], 80.00th=[ 6259], 90.00th=[ 6456], 95.00th=[ 6587],
00:22:39.440 | 99.00th=[ 6849], 99.50th=[ 6915], 99.90th=[ 8225], 99.95th=[ 8979],
00:22:39.440 | 99.99th=[ 9765]
00:22:39.440 bw ( KiB/s): min=46930, max=48552, per=99.90%, avg=47990.50, stdev=755.52, samples=4
00:22:39.440 iops : min=11732, max=12138, avg=11998.00, stdev=189.10, samples=4
00:22:39.440 write: IOPS=12.0k, BW=46.7MiB/s (49.0MB/s)(93.7MiB/2005msec); 0 zone resets
00:22:39.440 slat (nsec): min=1549, max=225753, avg=1732.74, stdev=1645.66
00:22:39.440 clat (usec): min=2419, max=9453, avg=4753.18, stdev=361.53
00:22:39.440 lat (usec): min=2434, max=9455, avg=4754.91, stdev=361.47
00:22:39.440 clat percentiles (usec):
00:22:39.440 | 1.00th=[ 3949], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4490],
00:22:39.440 | 30.00th=[ 4555], 40.00th=[ 4686], 50.00th=[ 4752], 60.00th=[ 4817],
00:22:39.440 | 70.00th=[ 4948], 80.00th=[ 5014], 90.00th=[ 5145], 95.00th=[ 5276],
00:22:39.440 | 99.00th=[ 5538], 99.50th=[ 5669], 99.90th=[ 7635], 99.95th=[ 8356],
00:22:39.440 | 99.99th=[ 8979]
00:22:39.440 bw ( KiB/s): min=47488, max=48384, per=99.96%, avg=47818.00, stdev=414.84, samples=4
00:22:39.440 iops : min=11872, max=12096, avg=11954.50, stdev=103.71, samples=4
00:22:39.440 lat (msec) : 4=0.82%, 10=99.18%
00:22:39.440 cpu : usr=74.45%, sys=24.65%, ctx=58, majf=0, minf=3
00:22:39.440 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:22:39.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:22:39.440 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:22:39.440 issued rwts: total=24078,23978,0,0 short=0,0,0,0 dropped=0,0,0,0
00:22:39.440 latency : target=0, window=0, percentile=100.00%, depth=128
00:22:39.440
00:22:39.440 Run status group 0 (all jobs):
00:22:39.440 READ: bw=46.9MiB/s (49.2MB/s), 46.9MiB/s-46.9MiB/s (49.2MB/s-49.2MB/s), io=94.1MiB (98.6MB), run=2005-2005msec
00:22:39.440 WRITE: bw=46.7MiB/s (49.0MB/s), 46.7MiB/s-46.7MiB/s (49.0MB/s-49.0MB/s), io=93.7MiB (98.2MB), run=2005-2005msec
00:22:39.440 10:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:22:39.440 10:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:22:39.440 10:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:22:39.440 10:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:22:39.440 10:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local
sanitizers
00:22:39.440 10:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:22:39.440 10:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift
00:22:39.440 10:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib=
00:22:39.440 10:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:22:39.440 10:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:22:39.440 10:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan
00:22:39.440 10:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:22:39.440 10:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:22:39.440 10:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:22:39.440 10:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:22:39.440 10:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:22:39.440 10:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:22:39.440 10:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:22:39.440 10:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:22:39.440 10:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:22:39.440 10:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme'
00:22:39.440 10:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:22:39.698 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128
00:22:39.698 fio-3.35
00:22:39.698 Starting 1 thread
00:22:42.227
00:22:42.227 test: (groupid=0, jobs=1): err= 0: pid=1607806: Thu Dec 12 10:36:16 2024
00:22:42.227 read: IOPS=10.9k, BW=170MiB/s (179MB/s)(349MiB/2046msec)
00:22:42.227 slat (nsec): min=2447, max=90327, avg=2763.80, stdev=1201.11
00:22:42.227 clat (usec): min=1641, max=50286, avg=6821.71, stdev=3116.49
00:22:42.227 lat (usec): min=1644, max=50289, avg=6824.47, stdev=3116.53
00:22:42.227 clat percentiles (usec):
00:22:42.227 | 1.00th=[ 3556], 5.00th=[ 4178], 10.00th=[ 4621], 20.00th=[ 5276],
00:22:42.227 | 30.00th=[ 5735], 40.00th=[ 6128], 50.00th=[ 6521], 60.00th=[ 7046],
00:22:42.227 | 70.00th=[ 7439], 80.00th=[ 7898], 90.00th=[ 8848], 95.00th=[ 9634],
00:22:42.227 | 99.00th=[11600], 99.50th=[12518], 99.90th=[49021], 99.95th=[49546],
00:22:42.227 | 99.99th=[50070]
00:22:42.227 bw ( KiB/s): min=84352, max=94208, per=51.36%, avg=89592.00, stdev=4603.73, samples=4
00:22:42.227 iops : min= 5272, max= 5888, avg=5599.50, stdev=287.73, samples=4
00:22:42.227 write: IOPS=6345, BW=99.1MiB/s (104MB/s)(183MiB/1848msec); 0 zone resets
00:22:42.227 slat (usec): min=29, max=334, avg=31.04, stdev= 6.17
00:22:42.227 clat (usec): min=4141, max=49901, avg=8543.69, stdev=2579.36
00:22:42.227 lat (usec): min=4170, max=49931, avg=8574.73, stdev=2579.77
00:22:42.227 clat percentiles (usec):
00:22:42.227 | 1.00th=[ 5538], 5.00th=[ 6259], 10.00th=[ 6652], 20.00th=[ 7177],
00:22:42.227 | 30.00th=[ 7570], 40.00th=[ 7963], 50.00th=[ 8291], 60.00th=[ 8717],
00:22:42.227 | 70.00th=[ 9110], 80.00th=[ 9634], 90.00th=[10552], 95.00th=[11338],
00:22:42.227 | 99.00th=[12518], 99.50th=[13173], 99.90th=[49021], 99.95th=[49546],
00:22:42.227 | 99.99th=[50070]
00:22:42.227 bw ( KiB/s): min=88800, max=98304, per=92.15%, avg=93552.00, stdev=4376.60, samples=4
00:22:42.227 iops : min= 5550, max= 6144, avg=5847.00, stdev=273.54, samples=4
00:22:42.227 lat (msec) : 2=0.04%, 4=2.25%, 10=90.02%, 20=7.32%, 50=0.35%
00:22:42.227 lat (msec) : 100=0.02%
00:22:42.227 cpu : usr=87.38%, sys=11.74%, ctx=58, majf=0, minf=3
00:22:42.227 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7%
00:22:42.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:22:42.227 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:22:42.228 issued rwts: total=22308,11726,0,0 short=0,0,0,0 dropped=0,0,0,0
00:22:42.228 latency : target=0, window=0, percentile=100.00%, depth=128
00:22:42.228
00:22:42.228 Run status group 0 (all jobs):
00:22:42.228 READ: bw=170MiB/s (179MB/s), 170MiB/s-170MiB/s (179MB/s-179MB/s), io=349MiB (365MB), run=2046-2046msec
00:22:42.228 WRITE: bw=99.1MiB/s (104MB/s), 99.1MiB/s-99.1MiB/s (104MB/s-104MB/s), io=183MiB (192MB), run=1848-1848msec
00:22:42.228 10:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:42.486 10:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']'
00:22:42.486 10:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:22:42.486 10:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state
00:22:42.486 10:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini
00:22:42.486 10:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:42.486 10:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync
00:22:42.486 10:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:42.486 10:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e
00:22:42.486 10:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:42.486 10:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:42.486 rmmod nvme_tcp
00:22:42.486 rmmod nvme_fabrics
00:22:42.486 rmmod nvme_keyring
00:22:42.486 10:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:42.486 10:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e
00:22:42.486 10:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0
00:22:42.486 10:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 1606337 ']'
00:22:42.486 10:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 1606337
00:22:42.486 10:36:16
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 1606337 00:22:42.486 10:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:22:42.486 10:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:42.486 10:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1606337 00:22:42.486 10:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:42.486 10:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:42.486 10:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1606337' 00:22:42.486 killing process with pid 1606337 00:22:42.486 10:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 1606337 00:22:42.486 10:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 1606337 00:22:42.745 10:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:42.745 10:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:42.745 10:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:42.745 10:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:22:42.745 10:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:22:42.745 10:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:42.745 10:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:22:42.745 10:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:42.745 10:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:42.745 10:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.745 10:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:42.745 10:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.281 10:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:45.281 00:22:45.281 real 0m15.898s 00:22:45.281 user 0m47.448s 00:22:45.281 sys 0m6.441s 00:22:45.281 10:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:45.281 10:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.281 ************************************ 00:22:45.281 END TEST nvmf_fio_host 00:22:45.281 ************************************ 00:22:45.281 10:36:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:45.281 10:36:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:45.281 10:36:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:45.281 10:36:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.281 ************************************ 00:22:45.281 START TEST nvmf_failover 00:22:45.281 ************************************ 00:22:45.281 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:45.281 * Looking for test storage... 00:22:45.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:45.281 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:45.281 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:22:45.281 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:45.281 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:45.281 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:45.281 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:45.281 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:45.281 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:22:45.281 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:22:45.281 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:22:45.281 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:22:45.281 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:22:45.281 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:22:45.281 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:22:45.281 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:45.281 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:22:45.281 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:22:45.281 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:45.281 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:45.281 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:22:45.281 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:22:45.281 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:45.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.282 --rc genhtml_branch_coverage=1 00:22:45.282 --rc genhtml_function_coverage=1 00:22:45.282 --rc genhtml_legend=1 00:22:45.282 --rc geninfo_all_blocks=1 00:22:45.282 --rc geninfo_unexecuted_blocks=1 00:22:45.282 00:22:45.282 ' 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:45.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.282 --rc genhtml_branch_coverage=1 00:22:45.282 --rc genhtml_function_coverage=1 00:22:45.282 --rc genhtml_legend=1 00:22:45.282 --rc geninfo_all_blocks=1 00:22:45.282 --rc geninfo_unexecuted_blocks=1 00:22:45.282 00:22:45.282 ' 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:45.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.282 --rc genhtml_branch_coverage=1 00:22:45.282 --rc genhtml_function_coverage=1 00:22:45.282 --rc genhtml_legend=1 00:22:45.282 --rc geninfo_all_blocks=1 00:22:45.282 --rc geninfo_unexecuted_blocks=1 00:22:45.282 00:22:45.282 ' 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:45.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.282 --rc genhtml_branch_coverage=1 00:22:45.282 --rc genhtml_function_coverage=1 00:22:45.282 --rc genhtml_legend=1 00:22:45.282 --rc geninfo_all_blocks=1 00:22:45.282 --rc geninfo_unexecuted_blocks=1 00:22:45.282 00:22:45.282 ' 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:45.282 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
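A note on the error logged just above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', and bash prints "[: : integer expression expected" because the flag variable being tested is empty while -eq only accepts integers. The test returns status 2, the surrounding logic treats it as false, and the run continues, so the message is noise rather than a failure. A minimal sketch of the failure mode and a defensive rewrite (the variable name here is hypothetical, not the autotest source):

  flag=''                    # empty flag variable, as in the trace above
  [ "$flag" -eq 1 ]          # prints "[: : integer expression expected", exits 2
  [ "${flag:-0}" -eq 1 ]     # defensive form: empty defaults to 0, test stays numeric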
00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:22:45.282 10:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:50.721 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:50.721 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:50.721 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:50.722 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.722 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:50.722 Found net devices under 0000:af:00.0: cvl_0_0 00:22:50.722 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.722 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:50.722 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.722 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:50.722 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.722 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:50.722 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:50.722 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.722 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:50.722 Found net devices under 0000:af:00.1: cvl_0_1 00:22:50.722 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.722 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:50.722 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:22:50.722 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:50.722 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:50.722 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:50.722 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:50.722 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:50.722 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:50.722 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:50.722 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:50.722 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:50.722 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:50.722 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:50.722 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
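The discovery pass above resolves each supported PCI ID to its kernel net device by globbing sysfs, which is how 0000:af:00.0 and 0000:af:00.1 get mapped to cvl_0_0 and cvl_0_1. A standalone sketch of the same technique, matching the 0x8086/0x159b (E810) ID pair echoed in the trace; the function name is ours, not part of the test scripts:

  find_e810_netdevs() {
      local pci netdev
      for pci in /sys/bus/pci/devices/*; do
          # keep only the vendor:device pair reported for this rig
          [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
          # same glob as pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) above
          for netdev in "$pci"/net/*; do
              [[ -e $netdev ]] && echo "${pci##*/} -> ${netdev##*/}"
          done
      done
  }
  find_e810_netdevs    # on this rig: 0000:af:00.0 -> cvl_0_0, 0000:af:00.1 -> cvl_0_1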
00:22:50.722 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:50.722 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:50.722 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:50.722 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:50.722 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:50.722 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:50.981 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:50.981 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:50.981 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:50.981 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:50.981 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:50.981 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:50.981 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:50.981 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:50.981 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:50.981 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:22:50.981 00:22:50.981 --- 10.0.0.2 ping statistics --- 00:22:50.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.981 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:22:50.981 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:50.981 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:50.981 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:22:50.981 00:22:50.981 --- 10.0.0.1 ping statistics --- 00:22:50.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.981 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:22:50.981 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:50.981 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:22:50.981 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:50.981 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:50.981 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:50.981 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:50.981 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:50.981 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:50.981 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:50.981 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:50.981 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:50.981 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:50.981 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:50.981 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=1611595 00:22:50.981 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 1611595 00:22:50.981 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:50.981 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1611595 ']' 00:22:50.981 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.981 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:50.981 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:50.981 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:50.981 10:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:51.240 [2024-12-12 10:36:25.030641] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:22:51.240 [2024-12-12 10:36:25.030692] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.240 [2024-12-12 10:36:25.109551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:51.240 [2024-12-12 10:36:25.149632] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:51.240 [2024-12-12 10:36:25.149667] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.240 [2024-12-12 10:36:25.149674] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.240 [2024-12-12 10:36:25.149679] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.240 [2024-12-12 10:36:25.149684] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:51.240 [2024-12-12 10:36:25.151051] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:51.240 [2024-12-12 10:36:25.151159] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.240 [2024-12-12 10:36:25.151160] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:22:51.240 10:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:51.240 10:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:51.240 10:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:51.240 10:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:51.240 10:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:51.498 10:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:51.498 10:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:51.498 [2024-12-12 10:36:25.444662] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:51.498 10:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:51.757 Malloc0 00:22:51.757 10:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:52.015 10:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:52.274 10:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:52.274 [2024-12-12 10:36:26.265063] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:52.275 10:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:52.534 [2024-12-12 10:36:26.441539] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:52.534 10:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:52.793 [2024-12-12 10:36:26.634164] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:22:52.793 10:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1611964 00:22:52.793 10:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:52.793 10:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1611964 /var/tmp/bdevperf.sock 00:22:52.793 10:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:52.793 10:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1611964 ']' 00:22:52.793 10:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:52.793 10:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:52.793 10:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:52.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:52.793 10:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:52.793 10:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:53.052 10:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:53.052 10:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:53.052 10:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:53.311 NVMe0n1 00:22:53.311 10:36:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:53.878 00:22:53.878 10:36:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1612022 00:22:53.878 10:36:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:53.878 10:36:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:22:54.814 10:36:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:55.073 [2024-12-12 10:36:28.879668] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7560 is same with the state(6) to be set 00:22:55.073 [2024-12-12 10:36:28.879711] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7560 is same with the state(6) to be set 00:22:55.073 [2024-12-12 10:36:28.879719] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7560 is same with the state(6) to be set 00:22:55.073 
[2024-12-12 10:36:28.879725 - 10:36:28.880261] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7560 is same with the state(6) to be set (message repeated; duplicate log lines collapsed) 00:22:55.074 
10:36:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:22:58.359 
10:36:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:58.359 
00:22:58.359 10:36:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:58.616 
[2024-12-12 10:36:32.533729] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e81c0 is same with the state(6) to be set 00:22:58.616 
[2024-12-12 10:36:32.533767 - 10:36:32.533857] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e81c0 is same with the state(6) to be set (message repeated; duplicate log lines collapsed) 00:22:58.617
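How to read the qpair state spam above: bdevperf attached the same controller name (NVMe0) over several listeners with -x failover, so the bdev has multiple paths, and each nvmf_subsystem_remove_listener call tears down the qpairs behind one path so I/O must fail over to a surviving one; the tcp.c:1790 messages are the target walking those qpairs down. Reduced to its essence, the path management this test performs looks roughly like the sketch below (same socket, addresses and NQN as this run; the loop is a simplification, since the trace actually adds the 4422 path only after 4420 is removed):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  for port in 4420 4421 4422; do
      # repeated attaches to the same -b name add failover paths to one bdev
      $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
          -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  done
  # target side: dropping a listener forces I/O onto the remaining paths
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420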
10:36:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:23:01.904 
10:36:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:01.904 
[2024-12-12 10:36:35.743846] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:01.904 
10:36:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:23:02.841 
10:36:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:03.100 
[2024-12-12 10:36:36.959600] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2534710 is same with the state(6) to be set 00:23:03.101 
[2024-12-12 10:36:36.959636 onward] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2534710 is same with the state(6) to be set (message repeated; duplicate log lines collapsed)
00:23:03.102 10:36:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1612022
00:23:09.683 {
00:23:09.683 "results": [
00:23:09.683 {
00:23:09.683 "job": "NVMe0n1",
00:23:09.683 "core_mask": "0x1",
00:23:09.683 "workload": "verify",
00:23:09.683 "status": "finished",
00:23:09.683 "verify_range": {
00:23:09.683 "start": 0,
00:23:09.683 "length": 16384
00:23:09.683 },
00:23:09.683 "queue_depth": 128,
00:23:09.683 "io_size": 4096,
00:23:09.683 "runtime": 15.012367,
00:23:09.683 "iops": 11235.137004044733,
00:23:09.683 "mibps": 43.88725392204974,
00:23:09.683 "io_failed": 10541,
00:23:09.683 "io_timeout": 0,
00:23:09.683 "avg_latency_us": 10701.083305940165,
00:23:09.683 "min_latency_us": 442.7580952380952,
00:23:09.683 "max_latency_us": 22719.146666666667
00:23:09.683 }
00:23:09.683 ],
00:23:09.683 "core_count": 1
00:23:09.683 }
00:23:09.683 10:36:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1611964
10:36:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1611964 ']'
10:36:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1611964
10:36:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
10:36:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
10:36:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1611964
10:36:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
10:36:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
10:36:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1611964'
killing process with pid 1611964
10:36:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1611964
10:36:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1611964
10:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
-- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:09.683 [2024-12-12 10:36:26.704342] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:23:09.683 [2024-12-12 10:36:26.704393] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1611964 ] 00:23:09.683 [2024-12-12 10:36:26.777873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.683 [2024-12-12 10:36:26.818353] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:09.683 Running I/O for 15 seconds... 00:23:09.683 11426.00 IOPS, 44.63 MiB/s [2024-12-12T09:36:43.706Z] [2024-12-12 10:36:28.881641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:100632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.683 [2024-12-12 10:36:28.881677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.683 [2024-12-12 10:36:28.881692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:100640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.683 [2024-12-12 10:36:28.881699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.683 [2024-12-12 10:36:28.881709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:100648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.683 [2024-12-12 10:36:28.881715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.683 [2024-12-12 10:36:28.881724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:100656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.683 [2024-12-12 10:36:28.881730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.683 [2024-12-12 10:36:28.881738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:100664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.683 [2024-12-12 10:36:28.881745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.683 [2024-12-12 10:36:28.881753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:100672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.683 [2024-12-12 10:36:28.881759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.683 [2024-12-12 10:36:28.881767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:100680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.683 [2024-12-12 10:36:28.881773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.683 [2024-12-12 10:36:28.881781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:100688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.683 [2024-12-12 10:36:28.881788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.683 [2024-12-12 10:36:28.881795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:100696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.683 [2024-12-12 10:36:28.881802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.683 [2024-12-12 10:36:28.881809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.683 [2024-12-12 10:36:28.881816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.683 [2024-12-12 10:36:28.881824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:100712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.683 [2024-12-12 10:36:28.881830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.683 [2024-12-12 10:36:28.881844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:100720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.683 [2024-12-12 10:36:28.881851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.683 [2024-12-12 10:36:28.881859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:100728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.683 [2024-12-12 10:36:28.881865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.683 [2024-12-12 10:36:28.881873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:100736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.683 [2024-12-12 10:36:28.881879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.683 [2024-12-12 10:36:28.881888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:100744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.683 [2024-12-12 10:36:28.881894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.683 [2024-12-12 10:36:28.881902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:100752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.683 [2024-12-12 10:36:28.881909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.683 [2024-12-12 10:36:28.881917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:100760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.683 [2024-12-12 10:36:28.881924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.684 [2024-12-12 10:36:28.881932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:100768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.684 [2024-12-12 10:36:28.881939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.684 [2024-12-12 10:36:28.881946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:100776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.684 [2024-12-12 10:36:28.881953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.684 [2024-12-12 10:36:28.881961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.684 [2024-12-12 10:36:28.881967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.684 [2024-12-12 10:36:28.881975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:100792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.684 [2024-12-12 10:36:28.881981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.684 [2024-12-12 10:36:28.881989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:100800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.684 [2024-12-12 10:36:28.881996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.684 [2024-12-12 10:36:28.882003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:100808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.684 [2024-12-12 10:36:28.882010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.684 [2024-12-12 10:36:28.882017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:100816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.684 [2024-12-12 10:36:28.882029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.684 [2024-12-12 10:36:28.882038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:100824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.684 [2024-12-12 10:36:28.882045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.684 [2024-12-12 10:36:28.882053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:100832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.684 [2024-12-12 10:36:28.882060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.684 [2024-12-12 10:36:28.882068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:100840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.684 [2024-12-12 10:36:28.882074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.684 [2024-12-12 10:36:28.882082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:100848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.684 [2024-12-12 10:36:28.882089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:09.684 [2024-12-12 10:36:28.882096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.684 [2024-12-12 10:36:28.882103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.684 [2024-12-12 10:36:28.882111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:100864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.684 [2024-12-12 10:36:28.882117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.684 [2024-12-12 10:36:28.882125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:100872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.684 [2024-12-12 10:36:28.882132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.684 [2024-12-12 10:36:28.882140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.684 [2024-12-12 10:36:28.882146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.684 [2024-12-12 10:36:28.882154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:100888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.684 [2024-12-12 10:36:28.882161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.684 [2024-12-12 10:36:28.882168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:100896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.684 [2024-12-12 10:36:28.882175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.684 [2024-12-12 10:36:28.882182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:100904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.684 [2024-12-12 10:36:28.882189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.684 [2024-12-12 10:36:28.882197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:100912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.684 [2024-12-12 10:36:28.882203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.684 [2024-12-12 10:36:28.882212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:100920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.684 [2024-12-12 10:36:28.882219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.684 [2024-12-12 10:36:28.882227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:100928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.684 [2024-12-12 10:36:28.882233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.684 
[2024-12-12 10:36:28.882241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:100936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.684 [2024-12-12 10:36:28.882247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.684 [2024-12-12 10:36:28.882255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:100944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.684 [2024-12-12 10:36:28.882262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.684 [2024-12-12 10:36:28.882269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:100952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.684 [2024-12-12 10:36:28.882275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.684 [2024-12-12 10:36:28.882284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:100968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.684 [2024-12-12 10:36:28.882290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.684 [2024-12-12 10:36:28.882298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.684 [2024-12-12 10:36:28.882305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.684 [2024-12-12 10:36:28.882312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:100984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.684 [2024-12-12 10:36:28.882319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.684 [2024-12-12 10:36:28.882326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:100992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.684 [2024-12-12 10:36:28.882332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.684 [2024-12-12 10:36:28.882340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:101000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.684 [2024-12-12 10:36:28.882346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.684 [2024-12-12 10:36:28.882354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:101008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.684 [2024-12-12 10:36:28.882360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.684 [2024-12-12 10:36:28.882368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:101016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.684 [2024-12-12 10:36:28.882375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.684 [2024-12-12 10:36:28.882383] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:101024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.684 [2024-12-12 10:36:28.882391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.684 [2024-12-12 10:36:28.882399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:101032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.684 [2024-12-12 10:36:28.882405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.684 [2024-12-12 10:36:28.882413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:101040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.684 [2024-12-12 10:36:28.882419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.684 [2024-12-12 10:36:28.882427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:101048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.684 [2024-12-12 10:36:28.882434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.684 [2024-12-12 10:36:28.882441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:101056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.684 [2024-12-12 10:36:28.882448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.685 [2024-12-12 10:36:28.882456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:101064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.685 [2024-12-12 10:36:28.882462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.685 [2024-12-12 10:36:28.882470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:101072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.685 [2024-12-12 10:36:28.882476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.685 [2024-12-12 10:36:28.882484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:101080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.685 [2024-12-12 10:36:28.882490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.685 [2024-12-12 10:36:28.882498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:101088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.685 [2024-12-12 10:36:28.882504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.685 [2024-12-12 10:36:28.882512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:101096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.685 [2024-12-12 10:36:28.882518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.685 [2024-12-12 10:36:28.882526] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:101104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.685 [2024-12-12 10:36:28.882532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.685 [2024-12-12 10:36:28.882539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:101112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.685 [2024-12-12 10:36:28.882546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.685 [2024-12-12 10:36:28.882554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:101120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.685 [2024-12-12 10:36:28.882560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.685 [2024-12-12 10:36:28.882573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:101128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.685 [2024-12-12 10:36:28.882580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.685 [2024-12-12 10:36:28.882595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:101136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.685 [2024-12-12 10:36:28.882602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.685 [2024-12-12 10:36:28.882610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:100960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.685 [2024-12-12 10:36:28.882616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.685 [2024-12-12 10:36:28.882624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:101144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.685 [2024-12-12 10:36:28.882631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.685 [2024-12-12 10:36:28.882638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:101152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.685 [2024-12-12 10:36:28.882645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.685 [2024-12-12 10:36:28.882653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:101160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.685 [2024-12-12 10:36:28.882659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.685 [2024-12-12 10:36:28.882667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:101168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.685 [2024-12-12 10:36:28.882673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.685 [2024-12-12 10:36:28.882681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:35 nsid:1 lba:101176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.685 [2024-12-12 10:36:28.882687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.685 [2024-12-12 10:36:28.882694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:101184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.685 [2024-12-12 10:36:28.882701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.685 [2024-12-12 10:36:28.882709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:101192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.685 [2024-12-12 10:36:28.882715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.685 [2024-12-12 10:36:28.882722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:101200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.685 [2024-12-12 10:36:28.882729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.685 [2024-12-12 10:36:28.882736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:101208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.685 [2024-12-12 10:36:28.882742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.685 [2024-12-12 10:36:28.882750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:101216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.685 [2024-12-12 10:36:28.882758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.685 [2024-12-12 10:36:28.882766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:101224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.685 [2024-12-12 10:36:28.882772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.685 [2024-12-12 10:36:28.882780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:101232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.685 [2024-12-12 10:36:28.882786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.685 [2024-12-12 10:36:28.882793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:101240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.685 [2024-12-12 10:36:28.882800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.685 [2024-12-12 10:36:28.882807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:101248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.685 [2024-12-12 10:36:28.882814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.685 [2024-12-12 10:36:28.882824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:101256 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.685 [2024-12-12 10:36:28.882830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.685 [2024-12-12 10:36:28.882837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:101264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.685 [2024-12-12 10:36:28.882843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.685 [2024-12-12 10:36:28.882851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:101272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.685 [2024-12-12 10:36:28.882858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.685 [2024-12-12 10:36:28.882866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:101280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.685 [2024-12-12 10:36:28.882872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.685 [2024-12-12 10:36:28.882880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:101288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.685 [2024-12-12 10:36:28.882886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.685 [2024-12-12 10:36:28.882893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:101296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.685 [2024-12-12 10:36:28.882900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.685 [2024-12-12 10:36:28.882907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.685 [2024-12-12 10:36:28.882914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.685 [2024-12-12 10:36:28.882921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:101312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.685 [2024-12-12 10:36:28.882928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.685 [2024-12-12 10:36:28.882935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:101320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.685 [2024-12-12 10:36:28.882943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.685 [2024-12-12 10:36:28.882950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.686 [2024-12-12 10:36:28.882956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.686 [2024-12-12 10:36:28.882964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:09.686 [2024-12-12 10:36:28.882970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.686 [2024-12-12 10:36:28.882978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:101344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.686 [2024-12-12 10:36:28.882984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.686 [2024-12-12 10:36:28.882992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:101352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.686 [2024-12-12 10:36:28.882998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.686 [2024-12-12 10:36:28.883006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:101360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.686 [2024-12-12 10:36:28.883012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.686 [2024-12-12 10:36:28.883020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:101368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.686 [2024-12-12 10:36:28.883026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.686 [2024-12-12 10:36:28.883033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:101376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.686 [2024-12-12 10:36:28.883040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.686 [2024-12-12 10:36:28.883048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:101384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.686 [2024-12-12 10:36:28.883055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.686 [2024-12-12 10:36:28.883063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:101392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.686 [2024-12-12 10:36:28.883069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.686 [2024-12-12 10:36:28.883076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:101400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.686 [2024-12-12 10:36:28.883083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.686 [2024-12-12 10:36:28.883090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.686 [2024-12-12 10:36:28.883097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.686 [2024-12-12 10:36:28.883105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:101416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.686 [2024-12-12 10:36:28.883111] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.686 [2024-12-12 10:36:28.883120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:101424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.686 [2024-12-12 10:36:28.883127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.686 [2024-12-12 10:36:28.883134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:101432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.686 [2024-12-12 10:36:28.883141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.686 [2024-12-12 10:36:28.883148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:101440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.686 [2024-12-12 10:36:28.883155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.686 [2024-12-12 10:36:28.883163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:101448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.686 [2024-12-12 10:36:28.883169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.686 [2024-12-12 10:36:28.883176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:101456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.686 [2024-12-12 10:36:28.883182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.686 [2024-12-12 10:36:28.883190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:101464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.686 [2024-12-12 10:36:28.883196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.686 [2024-12-12 10:36:28.883203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:101472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.686 [2024-12-12 10:36:28.883210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.686 [2024-12-12 10:36:28.883217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:101480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.686 [2024-12-12 10:36:28.883224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.686 [2024-12-12 10:36:28.883231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:101488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.686 [2024-12-12 10:36:28.883237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.686 [2024-12-12 10:36:28.883245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:101496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.686 [2024-12-12 10:36:28.883251] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.686 [2024-12-12 10:36:28.883259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:101504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.686 [2024-12-12 10:36:28.883265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.686 [2024-12-12 10:36:28.883274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:101512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.686 [2024-12-12 10:36:28.883280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.686 [2024-12-12 10:36:28.883288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:101520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.686 [2024-12-12 10:36:28.883296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.686 [2024-12-12 10:36:28.883314] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.686 [2024-12-12 10:36:28.883322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101528 len:8 PRP1 0x0 PRP2 0x0 00:23:09.686 [2024-12-12 10:36:28.883329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.686 [2024-12-12 10:36:28.883338] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.686 [2024-12-12 10:36:28.883344] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.686 [2024-12-12 10:36:28.883349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101536 len:8 PRP1 0x0 PRP2 0x0 00:23:09.686 [2024-12-12 10:36:28.883355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.686 [2024-12-12 10:36:28.883362] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.686 [2024-12-12 10:36:28.883367] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.686 [2024-12-12 10:36:28.883372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101544 len:8 PRP1 0x0 PRP2 0x0 00:23:09.686 [2024-12-12 10:36:28.883378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.686 [2024-12-12 10:36:28.883384] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.686 [2024-12-12 10:36:28.883389] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.686 [2024-12-12 10:36:28.883394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101552 len:8 PRP1 0x0 PRP2 0x0 00:23:09.686 [2024-12-12 10:36:28.883400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.686 [2024-12-12 10:36:28.883406] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.686 [2024-12-12 10:36:28.883411] 
00:23:09.686 [2024-12-12 10:36:28.883417 - 10:36:28.895975] nvme_qpair.c: [condensed x12] aborting queued i/o / Command completed manually: WRITE sqid:1 cid:0 nsid:1 len:8 PRP1 0x0 PRP2 0x0, lba 101560 through 101648, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:09.687 [2024-12-12 10:36:28.896024] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:23:09.687 [2024-12-12 10:36:28.896050 - 10:36:28.896114] nvme_qpair.c: [condensed x4] ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000, each aborted ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:09.687 [2024-12-12 10:36:28.896122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:23:09.687 [2024-12-12 10:36:28.896166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ead5d0 (9): Bad file descriptor
00:23:09.687 [2024-12-12 10:36:28.899903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:23:09.687 [2024-12-12 10:36:29.055113] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
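The run condensed above is one complete path-failover cycle as bdev_nvme logs it: the TCP connection to 10.0.0.2:4420 dies, every command still queued on the I/O qpair is completed manually with ABORTED - SQ DELETION, the trid is failed over to 10.0.0.2:4421, the admin qpair's outstanding ASYNC EVENT REQUESTs are aborted, and the controller is disconnected and reset against the new path. For digging through the full, uncondensed console output, a minimal offline summarizer is sketched below; the regexes assume only the line format visible in this log, and summarize is a hypothetical helper, not part of SPDK or its test scripts.

import re
from collections import Counter

# Per-command abort notices printed by nvme_qpair.c, e.g.:
#   [2024-12-12 10:36:28.883439] nvme_qpair.c: 243:nvme_io_qpair_print_command:
#   *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101568 len:8 PRP1 0x0 PRP2 0x0
CMD_RE = re.compile(
    r"\[(?P<ts>\d{4}-\d{2}-\d{2} [\d:.]+)\] nvme_qpair\.c: *\d+:nvme_io_qpair_print_command: "
    r"\*NOTICE\*: (?P<op>READ|WRITE) sqid:(?P<sqid>\d+).*?lba:(?P<lba>\d+) len:(?P<len>\d+)"
)

# Failover notices printed by bdev_nvme.c, e.g.:
#   bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1]
#   Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
FAILOVER_RE = re.compile(
    r"bdev_nvme_failover_trid: \*NOTICE\*: \[(?P<nqn>[^,\]]+), \d+\] "
    r"Start failover from (?P<src>\S+) to (?P<dst>\S+)"
)

def summarize(log_text: str) -> None:
    """Count aborted commands and list trid failovers found in log_text."""
    ops = Counter()
    lbas = []
    for m in CMD_RE.finditer(log_text):
        ops[m["op"]] += 1
        lbas.append(int(m["lba"]))
    if lbas:
        print(f"aborted commands: {dict(ops)}, lba {min(lbas)}..{max(lbas)}")
    for m in FAILOVER_RE.finditer(log_text):
        print(f"failover: {m['nqn']}: {m['src']} -> {m['dst']}")

Fed this section of the log, it would report the two trid moves (10.0.0.2:4420 -> 4421, then 4421 -> 4422) and the roughly 160 READ/WRITE commands aborted around them.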
00:23:09.687 10508.00 IOPS, 41.05 MiB/s [2024-12-12T09:36:43.710Z] 10834.00 IOPS, 42.32 MiB/s [2024-12-12T09:36:43.710Z] 10949.00 IOPS, 42.77 MiB/s [2024-12-12T09:36:43.710Z]
00:23:09.687 [2024-12-12 10:36:32.533987 - 10:36:32.534356] nvme_qpair.c: [condensed x23] READ sqid:1 nsid:1 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, lba 84784 through 84960, each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:09.688 [2024-12-12 10:36:32.534364 - 10:36:32.535549] nvme_qpair.c: [condensed x82] WRITE sqid:1 nsid:1 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, lba 84976 through 85624, each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:09.690 [2024-12-12 10:36:32.535577 - 10:36:32.546811] nvme_qpair.c: [condensed x23] aborting queued i/o / Command completed manually: WRITE sqid:1 cid:0 nsid:1 len:8 PRP1 0x0 PRP2 0x0, lba 85632 through 85800, plus one queued READ lba 84968, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:09.692 [2024-12-12 10:36:32.546858] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:23:09.692 [2024-12-12 10:36:32.546886 - 10:36:32.546955] nvme_qpair.c: [condensed x4] ASYNC EVENT REQUEST (0c) qid:0 cid:3-0 nsid:0 cdw10:00000000 cdw11:00000000, each aborted ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:09.692 [2024-12-12 10:36:32.546965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:23:09.692 [2024-12-12 10:36:32.546992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ead5d0 (9): Bad file descriptor
00:23:09.692 [2024-12-12 10:36:32.550731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:23:09.692 [2024-12-12 10:36:32.581768] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
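Between the abort bursts, the I/O generator keeps printing running throughput samples ("NNNN.NN IOPS, NN.NN MiB/s"). Pulling just those markers out is the quickest way to see that I/O dips briefly and recovers across each controller reset rather than stalling. A small sketch in the same spirit as above; the marker format is taken from this log, and nothing here is an SPDK or test-harness API:

import re

# Throughput samples as they appear in this log, e.g.:
#   10834.00 IOPS, 42.32 MiB/s [2024-12-12T09:36:43.710Z]
SAMPLE_RE = re.compile(r"(?P<iops>\d+\.\d+) IOPS, (?P<mibs>\d+\.\d+) MiB/s")

def throughput_trend(log_text: str) -> list[tuple[float, float]]:
    """Return (IOPS, MiB/s) samples in the order they appear in log_text."""
    return [(float(m["iops"]), float(m["mibs"])) for m in SAMPLE_RE.finditer(log_text)]

For the samples in this section, the trend is 10508 -> 10834 -> 10949 IOPS before the second failover, a small dip to 10912 just after it, then a climb back to 11156 IOPS: consistent with I/O continuing across both path failovers.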
00:23:09.692 10912.40 IOPS, 42.63 MiB/s [2024-12-12T09:36:43.715Z] 10982.17 IOPS, 42.90 MiB/s [2024-12-12T09:36:43.715Z] 11059.57 IOPS, 43.20 MiB/s [2024-12-12T09:36:43.715Z] 11111.75 IOPS, 43.41 MiB/s [2024-12-12T09:36:43.715Z] 11156.33 IOPS, 43.58 MiB/s
[2024-12-12T09:36:43.715Z] [2024-12-12 10:36:36.961431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:103448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:09.692 [2024-12-12 10:36:36.961465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... roughly a hundred further identical NOTICE pairs elided: the remaining in-flight READs (lba 103456-103736) and WRITEs (lba 103872-104440) on qid:1 are each printed and completed with ABORTED - SQ DELETION (00/08) ...]
00:23:09.695 [2024-12-12 10:36:36.963061] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[... the queued requests (WRITE lba 104448-104464, READ lba 103744-103864) are aborted and completed manually with the same ABORTED - SQ DELETION (00/08) status ...]
00:23:09.696 [2024-12-12 10:36:36.975074] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:23:09.696 [2024-12-12 10:36:36.975102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[... matching ABORTED - SQ DELETION (00/08) completions and the cid:1, cid:2, cid:3 ASYNC EVENT REQUEST aborts elided ...]
00:23:09.696 [2024-12-12 10:36:36.975174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:23:09.696 [2024-12-12 10:36:36.975201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ead5d0 (9): Bad file descriptor
00:23:09.696 [2024-12-12 10:36:36.978969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:23:09.696 [2024-12-12 10:36:37.008769] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
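That is the last path switch of this run (back to 10.0.0.2:4420), leaving three 'Resetting controller successful' notices in the captured output. The check the script performs next (visible in the trace below) boils down to the following sketch; the try.txt path is inferred from the later cat and rm -f calls in this same trace, and the exit on mismatch stands in for whatever error handling host/failover.sh actually does:

  count=$(grep -c 'Resetting controller successful' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt)
  (( count != 3 )) && exit 1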
00:23:09.696 11133.10 IOPS, 43.49 MiB/s [2024-12-12T09:36:43.719Z] 11141.55 IOPS, 43.52 MiB/s [2024-12-12T09:36:43.719Z] 11165.17 IOPS, 43.61 MiB/s [2024-12-12T09:36:43.719Z] 11195.92 IOPS, 43.73 MiB/s [2024-12-12T09:36:43.719Z] 11221.86 IOPS, 43.84 MiB/s [2024-12-12T09:36:43.719Z] 11235.93 IOPS, 43.89 MiB/s
00:23:09.696 Latency(us)
00:23:09.696 [2024-12-12T09:36:43.719Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:09.696 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:09.696 Verification LBA range: start 0x0 length 0x4000
00:23:09.696 NVMe0n1 : 15.01 11235.14 43.89 702.15 0.00 10701.08 442.76 22719.15
00:23:09.696 [2024-12-12T09:36:43.719Z] ===================================================================================================================
00:23:09.696 [2024-12-12T09:36:43.719Z] Total : 11235.14 43.89 702.15 0.00 10701.08 442.76 22719.15
00:23:09.696 Received shutdown signal, test time was about 15.000000 seconds
00:23:09.696
00:23:09.696 Latency(us)
00:23:09.696 [2024-12-12T09:36:43.719Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:09.696 [2024-12-12T09:36:43.719Z] ===================================================================================================================
00:23:09.696 [2024-12-12T09:36:43.719Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
10:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:23:09.696 10:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:23:09.696 10:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:23:09.696 10:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1614514
00:23:09.696 10:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:23:09.696 10:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1614514 /var/tmp/bdevperf.sock
00:23:09.696 10:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1614514 ']'
00:23:09.696 10:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:09.696 10:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:09.696 10:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
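The MiB/s column in the tables above is derived from IOPS at the fixed 4096-byte I/O size, so the totals can be sanity-checked directly (1 MiB = 1048576 bytes):

  awk 'BEGIN { printf "%.2f MiB/s\n", 11235.14 * 4096 / 1048576 }'   # prints 43.89 MiB/s, matching the Total row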
00:23:09.696 10:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:09.696 10:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:23:09.696 10:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:09.696 10:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:23:09.696 10:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:23:09.696 [2024-12-12 10:36:43.502898] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:23:09.696 10:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:23:09.955 [2024-12-12 10:36:43.699464] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:23:09.955 10:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:23:10.214 NVMe0n1
00:23:10.214 10:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:23:10.474
00:23:10.474 10:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:23:11.042
00:23:11.042 10:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:11.042 10:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:23:11.042 10:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:11.301 10:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:23:14.591 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:14.591 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:23:14.591 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:23:14.591 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1615338
00:23:14.591 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1615338
00:23:15.528 {
00:23:15.528 "results": [
00:23:15.528 {
00:23:15.528 "job": "NVMe0n1",
00:23:15.528 "core_mask": "0x1",
00:23:15.528 "workload": "verify", 00:23:15.528 "status": "finished", 00:23:15.528 "verify_range": { 00:23:15.528 "start": 0, 00:23:15.528 "length": 16384 00:23:15.528 }, 00:23:15.528 "queue_depth": 128, 00:23:15.528 "io_size": 4096, 00:23:15.528 "runtime": 1.004482, 00:23:15.528 "iops": 11427.780686961041, 00:23:15.528 "mibps": 44.63976830844157, 00:23:15.528 "io_failed": 0, 00:23:15.528 "io_timeout": 0, 00:23:15.528 "avg_latency_us": 11159.663456996004, 00:23:15.528 "min_latency_us": 397.89714285714285, 00:23:15.528 "max_latency_us": 8987.794285714286 00:23:15.528 } 00:23:15.528 ], 00:23:15.528 "core_count": 1 00:23:15.528 } 00:23:15.528 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:15.528 [2024-12-12 10:36:43.126889] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:23:15.528 [2024-12-12 10:36:43.126943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1614514 ] 00:23:15.528 [2024-12-12 10:36:43.202016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.528 [2024-12-12 10:36:43.239143] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:15.528 [2024-12-12 10:36:45.191890] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:15.528 [2024-12-12 10:36:45.191936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:15.528 [2024-12-12 10:36:45.191948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.528 [2024-12-12 10:36:45.191956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:15.528 [2024-12-12 10:36:45.191963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.528 [2024-12-12 10:36:45.191971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:15.528 [2024-12-12 10:36:45.191977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.528 [2024-12-12 10:36:45.191984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:15.528 [2024-12-12 10:36:45.191991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.528 [2024-12-12 10:36:45.191997] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:23:15.528 [2024-12-12 10:36:45.192022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
00:23:15.528 [2024-12-12 10:36:45.192036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15675d0 (9): Bad file descriptor
00:23:15.528 [2024-12-12 10:36:45.293869] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
00:23:15.528 Running I/O for 1 seconds...
00:23:15.528 11351.00 IOPS, 44.34 MiB/s
00:23:15.528 Latency(us)
00:23:15.528 [2024-12-12T09:36:49.551Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:15.528 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:15.528 Verification LBA range: start 0x0 length 0x4000
00:23:15.528 NVMe0n1 : 1.00 11427.78 44.64 0.00 0.00 11159.66 397.90 8987.79
00:23:15.528 [2024-12-12T09:36:49.551Z] ===================================================================================================================
00:23:15.528 [2024-12-12T09:36:49.551Z] Total : 11427.78 44.64 0.00 0.00 11159.66 397.90 8987.79
00:23:15.528 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:15.528 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:23:15.787 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:16.045 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:16.045 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:23:16.303 10:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:16.561 10:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:23:19.844 10:36:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:19.844 10:36:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:23:19.844 10:36:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1614514
00:23:19.844 10:36:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1614514 ']'
00:23:19.844 10:36:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1614514
00:23:19.844 10:36:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:23:19.845 10:36:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:19.845 10:36:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1614514
00:23:19.845 10:36:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:23:19.845 10:36:53 nvmf_tcp.nvmf_host.nvmf_failover --
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:19.845 10:36:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1614514' 00:23:19.845 killing process with pid 1614514 00:23:19.845 10:36:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1614514 00:23:19.845 10:36:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1614514 00:23:19.845 10:36:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:19.845 10:36:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:20.103 10:36:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:20.103 10:36:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:20.103 10:36:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:20.103 10:36:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:20.103 10:36:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:23:20.103 10:36:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:20.103 10:36:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:23:20.103 10:36:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:20.103 10:36:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:20.103 rmmod nvme_tcp 00:23:20.103 rmmod nvme_fabrics 00:23:20.103 rmmod nvme_keyring 00:23:20.103 10:36:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:20.103 10:36:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:23:20.103 10:36:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:23:20.103 10:36:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 1611595 ']' 00:23:20.103 10:36:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 1611595 00:23:20.103 10:36:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1611595 ']' 00:23:20.103 10:36:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1611595 00:23:20.103 10:36:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:20.103 10:36:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:20.103 10:36:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1611595 00:23:20.103 10:36:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:20.103 10:36:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:20.103 10:36:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1611595' 00:23:20.103 killing process with pid 1611595 00:23:20.103 10:36:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1611595 00:23:20.103 10:36:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1611595 00:23:20.362 10:36:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso 
']' 00:23:20.362 10:36:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:20.362 10:36:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:20.362 10:36:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:23:20.362 10:36:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:23:20.362 10:36:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:20.362 10:36:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:23:20.362 10:36:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:20.362 10:36:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:20.362 10:36:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:20.362 10:36:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:20.362 10:36:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:22.900 00:23:22.900 real 0m37.594s 00:23:22.900 user 1m58.781s 00:23:22.900 sys 0m7.954s 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:22.900 ************************************ 00:23:22.900 END TEST nvmf_failover 00:23:22.900 ************************************ 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.900 ************************************ 00:23:22.900 START TEST nvmf_host_discovery 00:23:22.900 ************************************ 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:22.900 * Looking for test storage... 
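The real/user/sys triple and the START/END TEST banners above come from the run_test wrapper in autotest_common.sh, which prints a banner, runs the test script under bash's time builtin, and closes with the matching banner; the next suite is launched the same way via run_test nvmf_host_discovery .../discovery.sh --transport=tcp. Roughly (a simplified sketch of the idea, not the exact SPDK source):

run_test() {                      # sketch: banner + timed execution of a test script
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                     # emits the real/user/sys lines seen above
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}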
00:23:22.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:22.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.900 --rc genhtml_branch_coverage=1 00:23:22.900 --rc genhtml_function_coverage=1 00:23:22.900 --rc genhtml_legend=1 00:23:22.900 --rc geninfo_all_blocks=1 00:23:22.900 --rc geninfo_unexecuted_blocks=1 00:23:22.900 00:23:22.900 ' 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:22.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.900 --rc genhtml_branch_coverage=1 00:23:22.900 --rc genhtml_function_coverage=1 00:23:22.900 --rc genhtml_legend=1 00:23:22.900 --rc geninfo_all_blocks=1 00:23:22.900 --rc geninfo_unexecuted_blocks=1 00:23:22.900 00:23:22.900 ' 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:22.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.900 --rc genhtml_branch_coverage=1 00:23:22.900 --rc genhtml_function_coverage=1 00:23:22.900 --rc genhtml_legend=1 00:23:22.900 --rc geninfo_all_blocks=1 00:23:22.900 --rc geninfo_unexecuted_blocks=1 00:23:22.900 00:23:22.900 ' 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:22.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.900 --rc genhtml_branch_coverage=1 00:23:22.900 --rc genhtml_function_coverage=1 00:23:22.900 --rc genhtml_legend=1 00:23:22.900 --rc geninfo_all_blocks=1 00:23:22.900 --rc geninfo_unexecuted_blocks=1 00:23:22.900 00:23:22.900 ' 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:22.900 10:36:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:23:22.900 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:22.901 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:22.901 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:22.901 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.901 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.901 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.901 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:22.901 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.901 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:23:22.901 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:22.901 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:22.901 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:22.901 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:22.901 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:22.901 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:22.901 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:22.901 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:22.901 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:22.901 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:22.901 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:22.901 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:22.901 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:22.901 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:22.901 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:22.901 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:22.901 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:22.901 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:22.901 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:22.901 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:22.901 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:22.901 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:22.901 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.901 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:22.901 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.901 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:22.901 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:22.901 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:23:22.901 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.473 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:29.473 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:23:29.473 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:29.474 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:29.474 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:29.474 10:37:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:29.474 Found net devices under 0000:af:00.0: cvl_0_0 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:29.474 Found net devices under 0000:af:00.1: cvl_0_1 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:29.474 
10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:23:29.474 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:29.474 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.371 ms
00:23:29.474
00:23:29.474 --- 10.0.0.2 ping statistics ---
00:23:29.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:29.474 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms
00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:29.474 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:29.474 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms
00:23:29.474
00:23:29.474 --- 10.0.0.1 ping statistics ---
00:23:29.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:29.474 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms
00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0
00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:23:29.474 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2
00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable
00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=1619701
00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 1619701
00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1619701 ']'
00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:29.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:29.475 [2024-12-12 10:37:02.586674] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization...
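At this point nvmftestinit has built the point-to-point test topology: the first E810 port (cvl_0_0) is moved into a fresh network namespace as the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator side at 10.0.0.1, and both pings confirm the path. Condensed from the trace above (interface and namespace names from this run; requires root):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                  # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator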
00:23:29.475 [2024-12-12 10:37:02.586719] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:29.475 [2024-12-12 10:37:02.661189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.475 [2024-12-12 10:37:02.700359] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:29.475 [2024-12-12 10:37:02.700394] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:29.475 [2024-12-12 10:37:02.700401] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:29.475 [2024-12-12 10:37:02.700407] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:29.475 [2024-12-12 10:37:02.700412] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:29.475 [2024-12-12 10:37:02.700911] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.475 [2024-12-12 10:37:02.836440] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.475 [2024-12-12 10:37:02.848616] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.475 null0 00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.475 null1 00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1619907 00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1619907 /tmp/host.sock 00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1619907 ']' 00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:29.475 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:29.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.475 [2024-12-12 10:37:02.922963] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
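Two SPDK applications are now running: the NVMe-oF target (pid 1619701, core mask 0x2) inside the cvl_0_0_ns_spdk namespace, answering RPCs on the default /var/tmp/spdk.sock, and a second nvmf_tgt instance (pid 1619907, core mask 0x1) acting as the host/initiator side with its RPC socket at /tmp/host.sock. Stripped of the nvmfappstart/waitforlisten wrappers, the two launches traced above amount to (paths from this run; backgrounding implied by the wrappers):

ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &   # target side
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &                             # host side
# every rpc_cmd that should reach the host app passes -s /tmp/host.sock, as in the traces that follow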
00:23:29.475 [2024-12-12 10:37:02.923002] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1619907 ] 00:23:29.475 [2024-12-12 10:37:02.995296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.475 [2024-12-12 10:37:03.037148] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.475 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:29.475 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:23:29.475 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:29.475 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:29.475 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.475 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.475 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.475 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:29.475 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.475 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.475 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.475 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:29.475 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:29.475 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:29.475 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:29.475 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.475 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:29.475 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.475 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:29.475 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.475 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:29.475 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:29.475 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:29.475 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:29.475 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.475 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:23:29.475 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.475 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:29.475 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.475 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:29.475 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:29.475 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.475 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.475 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.475 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:29.475 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:29.475 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:29.475 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.475 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:29.475 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.475 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:29.475 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.475 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:29.476 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:29.476 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:29.476 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:29.476 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.476 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:29.476 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.476 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:29.476 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.476 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:29.476 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:29.476 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.476 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.476 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.476 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:29.476 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:29.476 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.476 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.476 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:29.476 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:29.476 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:29.476 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.476 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:29.476 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:29.476 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:29.476 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:29.476 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.476 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:29.476 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.476 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:29.476 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.476 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:29.476 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:29.476 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.476 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.476 [2024-12-12 10:37:03.450134] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:29.476 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.476 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:29.476 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:29.476 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:29.476 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.476 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:29.476 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.476 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:29.476 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.735 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:29.735 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:23:29.735 10:37:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:29.735 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:29.735 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.735 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:29.735 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.735 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:29.735 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.735 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:29.735 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:29.735 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:29.735 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:29.735 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:29.735 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:29.735 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:29.735 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:29.735 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:29.735 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:29.735 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:29.735 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.735 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.735 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.735 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:29.735 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:29.735 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:29.735 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:29.735 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:29.735 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.735 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.735 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.735 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:29.735 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:29.735 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:29.735 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:29.735 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:29.735 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:29.735 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:29.735 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:29.735 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.735 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:29.735 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.735 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:29.735 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.735 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:23:29.735 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:23:30.302 [2024-12-12 10:37:04.153894] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:30.302 [2024-12-12 10:37:04.153916] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:30.302 [2024-12-12 10:37:04.153928] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:30.302 
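The waitforcondition / get_subsystem_names / get_bdev_list calls traced above are bash helpers from the SPDK test suite (the @918-@924 markers point into common/autotest_common.sh, the @55/@59 markers into host/discovery.sh). A minimal sketch of the polling pattern, reconstructed from the xtrace output — the max=10 bound and the eval of the condition string are visible above, the 1-second interval is taken from the sleep 1 calls, and the real helper may differ in detail:

    waitforcondition() {
        local cond=$1   # a bash expression, e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
        local max=10    # give up after ~10 attempts
        while ((max--)); do
            eval "$cond" && return 0   # condition holds: stop polling
            sleep 1                    # otherwise retry after a short delay
        done
        return 1                       # condition never became true
    }
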
[2024-12-12 10:37:04.240171] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:30.302 [2024-12-12 10:37:04.294706] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:23:30.302 [2024-12-12 10:37:04.295440] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x158bfa0:1 started. 00:23:30.302 [2024-12-12 10:37:04.296818] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:30.302 [2024-12-12 10:37:04.296833] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:30.560 [2024-12-12 10:37:04.343220] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x158bfa0 was disconnected and freed. delete nvme_qpair. 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.819 10:37:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:30.819 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:30.820 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.820 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.820 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.079 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:31.079 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:31.079 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:31.079 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:31.079 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:31.079 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.079 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:31.079 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.079 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:31.079 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:31.079 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:31.079 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:31.079 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:31.079 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:31.079 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:31.079 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:31.079 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.079 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:31.079 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:31.079 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:31.337 [2024-12-12 10:37:05.107183] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x158c320:1 started. 00:23:31.337 [2024-12-12 10:37:05.114952] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x158c320 was disconnected and freed. delete nvme_qpair. 
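The notification bookkeeping above (notification_count, notify_id) counts bdev events delivered through SPDK's notify RPC: attaching nvme0 created bdev nvme0n1 and produced one new notification, moving the count from 0 to 1 and the cursor notify_id from 0 to 1. Reconstructed from the trace — the socket, the -i cursor argument and the jq '. | length' filter are all visible above, while the cursor-advance rule is inferred from notify_id stepping 0 -> 1 -> 2 -> 4 across the run, so treat it as an assumption:

    get_notification_count() {
        # count the notifications newer than the last-seen id
        notification_count=$(rpc_cmd -s /tmp/host.sock \
            notify_get_notifications -i $notify_id | jq '. | length')
        # advance the cursor so the next call only sees new events (inferred)
        notify_id=$((notify_id + notification_count))
    }
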
00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:31.337 [2024-12-12 10:37:05.190805] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:31.337 [2024-12-12 10:37:05.191462] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:31.337 [2024-12-12 10:37:05.191481] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.337 [2024-12-12 10:37:05.279053] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 
nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:31.337 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:23:31.596 [2024-12-12 10:37:05.381765] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:23:31.596 [2024-12-12 10:37:05.381803] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:31.596 [2024-12-12 10:37:05.381811] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:31.596 [2024-12-12 10:37:05.381815] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:32.533 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:32.533 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:32.533 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:32.533 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:32.533 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:32.533 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:32.533 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:32.533 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.533 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:32.533 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.533 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:32.533 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:32.533 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:32.533 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:32.533 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:32.533 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:32.533 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:32.533 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:32.533 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:32.533 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:32.533 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:32.533 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.533 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.533 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:32.533 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.533 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:32.533 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:32.533 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:32.533 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:32.533 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:32.533 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.533 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.533 [2024-12-12 10:37:06.450683] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:32.533 [2024-12-12 10:37:06.450705] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:32.533 [2024-12-12 10:37:06.451527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.533 [2024-12-12 10:37:06.451542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.533 [2024-12-12 10:37:06.451550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.533 [2024-12-12 10:37:06.451557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.533 [2024-12-12 10:37:06.451564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.533 [2024-12-12 10:37:06.451575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.533 [2024-12-12 10:37:06.451582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.533 [2024-12-12 10:37:06.451589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.534 [2024-12-12 10:37:06.451596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x155c410 is same with the state(6) to be set 00:23:32.534 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.534 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:32.534 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:32.534 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:32.534 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:32.534 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # 
eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:32.534 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:32.534 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:32.534 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:32.534 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.534 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:32.534 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.534 [2024-12-12 10:37:06.461535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x155c410 (9): Bad file descriptor 00:23:32.534 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:32.534 [2024-12-12 10:37:06.471575] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:32.534 [2024-12-12 10:37:06.471588] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:32.534 [2024-12-12 10:37:06.471595] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:32.534 [2024-12-12 10:37:06.471599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:32.534 [2024-12-12 10:37:06.471616] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:32.534 [2024-12-12 10:37:06.471873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.534 [2024-12-12 10:37:06.471890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x155c410 with addr=10.0.0.2, port=4420 00:23:32.534 [2024-12-12 10:37:06.471898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x155c410 is same with the state(6) to be set 00:23:32.534 [2024-12-12 10:37:06.471910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x155c410 (9): Bad file descriptor 00:23:32.534 [2024-12-12 10:37:06.471933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:32.534 [2024-12-12 10:37:06.471941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:32.534 [2024-12-12 10:37:06.471948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:32.534 [2024-12-12 10:37:06.471954] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:32.534 [2024-12-12 10:37:06.471959] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:32.534 [2024-12-12 10:37:06.471963] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:32.534 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.534 [2024-12-12 10:37:06.481645] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:32.534 [2024-12-12 10:37:06.481655] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:32.534 [2024-12-12 10:37:06.481659] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:32.534 [2024-12-12 10:37:06.481663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:32.534 [2024-12-12 10:37:06.481675] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:32.534 [2024-12-12 10:37:06.481835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.534 [2024-12-12 10:37:06.481847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x155c410 with addr=10.0.0.2, port=4420 00:23:32.534 [2024-12-12 10:37:06.481854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x155c410 is same with the state(6) to be set 00:23:32.534 [2024-12-12 10:37:06.481864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x155c410 (9): Bad file descriptor 00:23:32.534 [2024-12-12 10:37:06.481873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:32.534 [2024-12-12 10:37:06.481880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:32.534 [2024-12-12 10:37:06.481886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:32.534 [2024-12-12 10:37:06.481892] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:32.534 [2024-12-12 10:37:06.481896] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:32.534 [2024-12-12 10:37:06.481900] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:32.534 [2024-12-12 10:37:06.491706] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:32.534 [2024-12-12 10:37:06.491716] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:32.534 [2024-12-12 10:37:06.491719] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:32.534 [2024-12-12 10:37:06.491723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:32.534 [2024-12-12 10:37:06.491738] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:32.534 [2024-12-12 10:37:06.491902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.534 [2024-12-12 10:37:06.491913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x155c410 with addr=10.0.0.2, port=4420 00:23:32.534 [2024-12-12 10:37:06.491920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x155c410 is same with the state(6) to be set 00:23:32.534 [2024-12-12 10:37:06.491930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x155c410 (9): Bad file descriptor 00:23:32.534 [2024-12-12 10:37:06.491940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:32.534 [2024-12-12 10:37:06.491946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:32.534 [2024-12-12 10:37:06.491952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:32.534 [2024-12-12 10:37:06.491958] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:32.534 [2024-12-12 10:37:06.491962] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:32.534 [2024-12-12 10:37:06.491966] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:32.534 [2024-12-12 10:37:06.501768] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:32.534 [2024-12-12 10:37:06.501781] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:32.534 [2024-12-12 10:37:06.501785] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:32.534 [2024-12-12 10:37:06.501789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:32.534 [2024-12-12 10:37:06.501804] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:32.534 [2024-12-12 10:37:06.502109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.534 [2024-12-12 10:37:06.502122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x155c410 with addr=10.0.0.2, port=4420 00:23:32.534 [2024-12-12 10:37:06.502129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x155c410 is same with the state(6) to be set 00:23:32.534 [2024-12-12 10:37:06.502140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x155c410 (9): Bad file descriptor 00:23:32.534 [2024-12-12 10:37:06.502156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:32.534 [2024-12-12 10:37:06.502163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:32.534 [2024-12-12 10:37:06.502170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:32.534 [2024-12-12 10:37:06.502175] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:23:32.534 [2024-12-12 10:37:06.502180] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:32.534 [2024-12-12 10:37:06.502184] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:32.534 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.535 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:32.535 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:32.535 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:32.535 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:32.535 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:32.535 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:32.535 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:32.535 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:32.535 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.535 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:32.535 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.535 [2024-12-12 10:37:06.511835] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:32.535 [2024-12-12 10:37:06.511848] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:32.535 [2024-12-12 10:37:06.511852] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:32.535 [2024-12-12 10:37:06.511857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:32.535 [2024-12-12 10:37:06.511869] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:32.535 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:32.535 [2024-12-12 10:37:06.512074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.535 [2024-12-12 10:37:06.512088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x155c410 with addr=10.0.0.2, port=4420 00:23:32.535 [2024-12-12 10:37:06.512097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x155c410 is same with the state(6) to be set 00:23:32.535 [2024-12-12 10:37:06.512108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x155c410 (9): Bad file descriptor 00:23:32.535 [2024-12-12 10:37:06.512122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:32.535 [2024-12-12 10:37:06.512129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:32.535 [2024-12-12 10:37:06.512136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:32.535 [2024-12-12 10:37:06.512142] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:32.535 [2024-12-12 10:37:06.512147] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:32.535 [2024-12-12 10:37:06.512151] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:32.535 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:32.535 [2024-12-12 10:37:06.521900] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:32.535 [2024-12-12 10:37:06.521913] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:32.535 [2024-12-12 10:37:06.521917] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:32.535 [2024-12-12 10:37:06.521922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:32.535 [2024-12-12 10:37:06.521935] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:32.535 [2024-12-12 10:37:06.522077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.535 [2024-12-12 10:37:06.522089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x155c410 with addr=10.0.0.2, port=4420 00:23:32.535 [2024-12-12 10:37:06.522099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x155c410 is same with the state(6) to be set 00:23:32.535 [2024-12-12 10:37:06.522110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x155c410 (9): Bad file descriptor 00:23:32.535 [2024-12-12 10:37:06.522119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:32.535 [2024-12-12 10:37:06.522125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:32.535 [2024-12-12 10:37:06.522132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:32.535 [2024-12-12 10:37:06.522138] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:32.535 [2024-12-12 10:37:06.522142] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:32.535 [2024-12-12 10:37:06.522146] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:32.535 [2024-12-12 10:37:06.531965] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:32.535 [2024-12-12 10:37:06.531975] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:32.535 [2024-12-12 10:37:06.531979] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:32.535 [2024-12-12 10:37:06.531984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:32.535 [2024-12-12 10:37:06.531996] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:32.535 [2024-12-12 10:37:06.532183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.535 [2024-12-12 10:37:06.532202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x155c410 with addr=10.0.0.2, port=4420 00:23:32.535 [2024-12-12 10:37:06.532209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x155c410 is same with the state(6) to be set 00:23:32.535 [2024-12-12 10:37:06.532219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x155c410 (9): Bad file descriptor 00:23:32.535 [2024-12-12 10:37:06.532234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:32.535 [2024-12-12 10:37:06.532241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:32.535 [2024-12-12 10:37:06.532247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:32.535 [2024-12-12 10:37:06.532253] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:23:32.535 [2024-12-12 10:37:06.532257] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:32.535 [2024-12-12 10:37:06.532261] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:32.535 [2024-12-12 10:37:06.536819] bdev_nvme.c:7303:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:32.535 [2024-12-12 10:37:06.536834] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:32.535 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.795 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:32.795 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:32.795 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:32.795 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:32.795 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:32.795 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:32.795 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:32.795 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:32.795 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:32.795 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:32.795 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.795 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:32.795 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.795 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:32.795 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.795 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:23:32.795 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:32.795 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:32.795 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:32.795 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:32.795 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:32.795 10:37:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:32.795 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:32.795 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:32.795 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:32.795 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:32.795 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:32.795 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.796 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:33.055 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:33.055 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:33.055 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:33.055 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:33.055 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.055 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.990 [2024-12-12 10:37:07.873743] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:33.990 [2024-12-12 10:37:07.873759] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:33.990 [2024-12-12 10:37:07.873771] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:33.990 [2024-12-12 10:37:07.960022] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:34.249 [2024-12-12 10:37:08.220158] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:23:34.249 [2024-12-12 10:37:08.220599] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1559d00:1 started. 00:23:34.249 [2024-12-12 10:37:08.222124] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:34.249 [2024-12-12 10:37:08.222148] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:34.249 [2024-12-12 10:37:08.223211] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1559d00 was disconnected and freed. delete nvme_qpair. 
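The repeated `local max=10` / `(( max-- ))` / `eval` lines traced above all come from one polling helper in common/autotest_common.sh. A minimal sketch of that pattern, assuming the shape the trace implies (the retry count matches the trace; the sleep interval is a guess):

    # Poll a shell condition until it holds or the retry budget runs out.
    waitforcondition() {
        local cond=$1
        local max=10
        while ((max--)); do
            eval "$cond" && return 0   # condition met, stop polling
            sleep 0.5                  # assumed delay between retries
        done
        return 1                       # condition never held
    }

    # Usage, as in the discovery teardown above:
    waitforcondition '[[ "$(get_bdev_list)" == "" ]]'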
00:23:34.249 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.249 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:34.249 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:34.249 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:34.249 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:34.249 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:34.249 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:34.249 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:34.249 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:34.249 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.249 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:34.249 request: 00:23:34.249 { 00:23:34.249 "name": "nvme", 00:23:34.249 "trtype": "tcp", 00:23:34.249 "traddr": "10.0.0.2", 00:23:34.249 "adrfam": "ipv4", 00:23:34.249 "trsvcid": "8009", 00:23:34.249 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:34.249 "wait_for_attach": true, 00:23:34.249 "method": "bdev_nvme_start_discovery", 00:23:34.249 "req_id": 1 00:23:34.249 } 00:23:34.249 Got JSON-RPC error response 00:23:34.249 response: 00:23:34.249 { 00:23:34.249 "code": -17, 00:23:34.249 "message": "File exists" 00:23:34.249 } 00:23:34.249 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:34.249 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:34.249 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:34.249 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:34.249 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:34.249 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:34.249 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:34.249 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:34.249 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.249 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:34.249 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:34.249 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:34.249 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.508 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:34.509 request: 00:23:34.509 { 00:23:34.509 "name": "nvme_second", 00:23:34.509 "trtype": "tcp", 00:23:34.509 "traddr": "10.0.0.2", 00:23:34.509 "adrfam": "ipv4", 00:23:34.509 "trsvcid": "8009", 00:23:34.509 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:34.509 "wait_for_attach": true, 00:23:34.509 "method": "bdev_nvme_start_discovery", 00:23:34.509 "req_id": 1 00:23:34.509 } 00:23:34.509 Got JSON-RPC error response 00:23:34.509 response: 00:23:34.509 { 00:23:34.509 "code": -17, 00:23:34.509 "message": "File exists" 00:23:34.509 } 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( 
es > 128 )) 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q 
nqn.2021-12.io.spdk:test -T 3000 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.509 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:35.445 [2024-12-12 10:37:09.461549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.445 [2024-12-12 10:37:09.461584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x155d130 with addr=10.0.0.2, port=8010 00:23:35.445 [2024-12-12 10:37:09.461599] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:35.445 [2024-12-12 10:37:09.461605] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:35.445 [2024-12-12 10:37:09.461611] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:36.822 [2024-12-12 10:37:10.463952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.822 [2024-12-12 10:37:10.463979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1576320 with addr=10.0.0.2, port=8010 00:23:36.822 [2024-12-12 10:37:10.463994] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:36.822 [2024-12-12 10:37:10.464017] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:36.822 [2024-12-12 10:37:10.464023] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:37.758 [2024-12-12 10:37:11.466164] bdev_nvme.c:7559:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:37.758 request: 00:23:37.758 { 00:23:37.758 "name": "nvme_second", 00:23:37.758 "trtype": "tcp", 00:23:37.758 "traddr": "10.0.0.2", 00:23:37.758 "adrfam": "ipv4", 00:23:37.758 "trsvcid": "8010", 00:23:37.758 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:37.758 "wait_for_attach": false, 00:23:37.758 "attach_timeout_ms": 3000, 00:23:37.758 "method": "bdev_nvme_start_discovery", 00:23:37.758 "req_id": 1 00:23:37.758 } 00:23:37.758 Got JSON-RPC error response 00:23:37.758 response: 00:23:37.758 { 00:23:37.758 "code": -110, 00:23:37.758 "message": "Connection timed out" 00:23:37.758 } 00:23:37.758 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:37.758 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:37.758 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:37.758 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:37.758 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:37.758 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:37.758 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:37.758 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:37.758 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.758 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:37.758 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.758 
10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:37.758 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.758 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:37.758 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:37.758 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1619907 00:23:37.758 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:37.758 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:37.758 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:23:37.758 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:37.758 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:23:37.758 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:37.758 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:37.758 rmmod nvme_tcp 00:23:37.758 rmmod nvme_fabrics 00:23:37.758 rmmod nvme_keyring 00:23:37.758 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:37.758 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:23:37.758 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:23:37.758 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 1619701 ']' 00:23:37.758 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 1619701 00:23:37.758 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 1619701 ']' 00:23:37.758 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 1619701 00:23:37.758 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:23:37.758 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:37.758 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1619701 00:23:37.758 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:37.758 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:37.758 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1619701' 00:23:37.758 killing process with pid 1619701 00:23:37.758 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 1619701 00:23:37.758 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 1619701 00:23:38.017 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:38.017 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:38.017 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:38.017 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:23:38.017 10:37:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:23:38.017 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:38.017 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:23:38.017 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:38.017 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:38.017 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.017 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:38.017 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.922 10:37:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:39.922 00:23:39.922 real 0m17.433s 00:23:39.922 user 0m20.917s 00:23:39.922 sys 0m5.947s 00:23:39.922 10:37:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:39.922 10:37:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.922 ************************************ 00:23:39.922 END TEST nvmf_host_discovery 00:23:39.922 ************************************ 00:23:39.922 10:37:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:39.922 10:37:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:39.922 10:37:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:39.922 10:37:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.922 ************************************ 00:23:39.922 START TEST nvmf_host_multipath_status 00:23:39.922 ************************************ 00:23:39.922 10:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:40.182 * Looking for test storage... 
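Every suite in this log runs through the harness's run_test wrapper, which emits the START TEST/END TEST banners and the real/user/sys timing summary seen above for nvmf_host_discovery. A hedged sketch of that wrapper, assuming the shape the banners imply (the real one lives in common/autotest_common.sh):

    # Assumed simplification of run_test: banner, time the suite, banner.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                       # yields the real/user/sys summary
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }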
00:23:40.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:40.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.182 --rc genhtml_branch_coverage=1 00:23:40.182 --rc genhtml_function_coverage=1 00:23:40.182 --rc genhtml_legend=1 00:23:40.182 --rc geninfo_all_blocks=1 00:23:40.182 --rc geninfo_unexecuted_blocks=1 00:23:40.182 00:23:40.182 ' 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:40.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.182 --rc genhtml_branch_coverage=1 00:23:40.182 --rc genhtml_function_coverage=1 00:23:40.182 --rc genhtml_legend=1 00:23:40.182 --rc geninfo_all_blocks=1 00:23:40.182 --rc geninfo_unexecuted_blocks=1 00:23:40.182 00:23:40.182 ' 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:40.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.182 --rc genhtml_branch_coverage=1 00:23:40.182 --rc genhtml_function_coverage=1 00:23:40.182 --rc genhtml_legend=1 00:23:40.182 --rc geninfo_all_blocks=1 00:23:40.182 --rc geninfo_unexecuted_blocks=1 00:23:40.182 00:23:40.182 ' 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:40.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.182 --rc genhtml_branch_coverage=1 00:23:40.182 --rc genhtml_function_coverage=1 00:23:40.182 --rc genhtml_legend=1 00:23:40.182 --rc geninfo_all_blocks=1 00:23:40.182 --rc geninfo_unexecuted_blocks=1 00:23:40.182 00:23:40.182 ' 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
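The scripts/common.sh trace above is the harness deciding whether the installed lcov (1.15 here) predates 2.x before picking the legacy `--rc lcov_*` coverage options. A condensed sketch of that dot-wise version comparison, assuming the behaviour the trace shows:

    # lt A B: succeed when version A sorts strictly before version B.
    # Assumed condensation of cmp_versions in scripts/common.sh.
    lt() {
        local -a ver1 ver2
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$2"
        local v
        for ((v = 0; v < ${#ver1[@]} && v < ${#ver2[@]}; v++)); do
            ((ver1[v] < ver2[v])) && return 0
            ((ver1[v] > ver2[v])) && return 1
        done
        return 1
    }

    lt 1.15 2 && echo "old lcov: keep --rc lcov_branch_coverage=1 options"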
00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:40.182 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:40.183 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:40.183 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:40.183 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:23:40.183 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:40.183 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:40.183 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:40.183 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.183 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.183 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.183 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:40.183 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.183 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:23:40.183 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:40.183 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:40.183 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:40.183 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:40.183 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:40.183 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:40.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:40.183 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:40.183 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:40.183 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:40.183 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:40.183 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:40.183 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:40.183 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:23:40.183 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:40.183 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:40.183 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:40.183 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:40.183 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:40.183 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:40.183 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:40.183 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:40.183 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.183 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:40.183 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:40.183 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:40.183 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:40.183 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:23:40.183 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:46.753 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:46.753 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:23:46.753 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:46.753 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:46.753 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:46.753 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:46.753 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:46.753 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:23:46.753 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:46.753 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:23:46.753 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:23:46.753 10:37:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:23:46.753 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:23:46.753 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:23:46.753 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:23:46.753 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:46.753 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:46.753 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:46.753 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:46.753 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:46.753 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:46.753 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:46.753 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:46.753 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:46.753 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:46.753 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:46.753 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:46.753 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:46.753 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:46.753 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:46.753 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:46.753 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:46.753 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:46.753 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:46.753 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:46.753 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:46.753 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:46.753 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:46.753 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.753 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
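The scan continues below for this port and then the second one; the pattern being traced is: for each supported PCI ID found on the bus, list the kernel network interfaces under the device's sysfs node and record them. A hedged condensation of that step, with the two E810 functions this machine reports:

    # BDFs and the message format are taken from the nearby log entries.
    net_devs=()
    for pci in 0000:af:00.0 0000:af:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # keep ifnames, drop sysfs path
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done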
00:23:46.753 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:46.753 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:46.754 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:46.754 Found net devices under 0000:af:00.0: cvl_0_0 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: 
cvl_0_1' 00:23:46.754 Found net devices under 0000:af:00.1: cvl_0_1 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:46.754 10:37:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:46.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:46.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.338 ms 00:23:46.754 00:23:46.754 --- 10.0.0.2 ping statistics --- 00:23:46.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.754 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:46.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:46.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:23:46.754 00:23:46.754 --- 10.0.0.1 ping statistics --- 00:23:46.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.754 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:46.754 10:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:46.754 10:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:46.754 10:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:46.754 10:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:46.754 10:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:46.754 10:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=1624912 00:23:46.754 10:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 1624912 00:23:46.754 10:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:46.754 10:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1624912 ']' 00:23:46.754 10:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.754 10:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:46.754 10:37:20 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:46.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:46.754 10:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:46.754 10:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:46.754 [2024-12-12 10:37:20.066365] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:23:46.754 [2024-12-12 10:37:20.066414] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:46.754 [2024-12-12 10:37:20.142882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:46.754 [2024-12-12 10:37:20.184045] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:46.754 [2024-12-12 10:37:20.184080] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:46.754 [2024-12-12 10:37:20.184087] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:46.754 [2024-12-12 10:37:20.184094] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:46.754 [2024-12-12 10:37:20.184101] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:46.754 [2024-12-12 10:37:20.185141] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:46.754 [2024-12-12 10:37:20.185142] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.754 10:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:46.754 10:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:23:46.754 10:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:46.754 10:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:46.754 10:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:46.754 10:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:46.754 10:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1624912 00:23:46.754 10:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:46.754 [2024-12-12 10:37:20.505740] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:46.755 10:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:46.755 Malloc0 00:23:46.755 10:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:23:47.014 10:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:47.273 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:47.532 [2024-12-12 10:37:21.311103] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:47.532 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:47.532 [2024-12-12 10:37:21.495545] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:47.532 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1625164 00:23:47.532 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:47.532 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:47.532 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1625164 /var/tmp/bdevperf.sock 00:23:47.532 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1625164 ']' 00:23:47.532 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:47.532 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:47.532 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:47.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
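At this point the target is fully configured and bdevperf is waiting on its RPC socket. For readers following the trace, the setup reduces to the sequence below, reconstructed from the traced commands above (full /var/jenkins/... paths shortened to rpc.py and bdevperf; a sketch, not the verbatim multipath_status.sh):

    # Target side: TCP transport, one 64 MB malloc namespace (512-byte blocks),
    # and two listeners on the same IP so the host sees two distinct paths.
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

    # Host side: bdevperf started with -z so it waits for RPC configuration on
    # /var/tmp/bdevperf.sock before running the 90 s verify workload.
    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90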
00:23:47.532 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:47.532 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:47.791 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:47.791 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:23:47.791 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:48.049 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:48.308 Nvme0n1 00:23:48.308 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:48.875 Nvme0n1 00:23:48.875 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:48.876 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:50.780 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:50.780 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:51.038 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:51.296 10:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:52.232 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:52.232 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:52.232 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.232 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:52.491 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.491 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:52.491 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.491 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:52.750 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:52.750 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:52.750 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.750 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:53.009 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:53.009 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:53.009 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.009 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:53.268 10:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:53.268 10:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:53.268 10:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.268 10:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:53.268 10:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:53.268 10:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:53.268 10:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.268 10:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:53.527 10:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:53.527 10:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:53.527 10:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
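Each of the status checks above is the same two-step probe: a bdev_nvme_get_io_paths RPC against the bdevperf socket, piped through a jq filter that selects the io_path whose trsvcid matches the port and extracts one attribute (current, connected, or accessible). A sketch of the port_status helper consistent with the traced commands (the exact body lives in test/nvmf/host/multipath_status.sh and may differ in detail):

    port_status() {
        local port=$1 attr=$2 expected=$3
        local actual
        # Ask bdevperf for its view of the I/O paths and pull the requested
        # attribute for the listener on $port.
        actual=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$attr")
        [[ "$actual" == "$expected" ]]
    }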
00:23:53.786 10:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:54.045 10:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:54.981 10:37:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:54.981 10:37:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:54.981 10:37:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.981 10:37:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:55.240 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:55.240 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:55.240 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.240 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:55.499 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.499 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:55.499 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:55.499 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.757 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.757 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:55.758 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.758 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:55.758 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.758 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:55.758 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
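The @59/@60 pair above is the set_ANA_state helper: it reprograms the ANA state of each listener on the target, after which the host is given a second (the sleep 1 in the trace) to pick up the change, typically via an AER-triggered ANA log page read, before the paths are re-checked. A sketch matching the traced calls:

    set_ANA_state() {
        # $1 = ANA state for the 4420 listener, $2 = for the 4421 listener
        # (optimized | non_optimized | inaccessible)
        rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }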
00:23:55.758 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:56.016 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:56.016 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:56.016 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.016 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:56.275 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:56.275 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:56.275 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:56.534 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:56.793 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:23:57.728 10:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:57.728 10:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:57.728 10:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.728 10:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:57.987 10:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:57.987 10:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:57.987 10:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.987 10:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:58.245 10:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:58.245 10:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:58.245 10:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.245 10:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:58.245 10:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:58.245 10:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:58.245 10:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.245 10:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:58.504 10:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:58.504 10:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:58.504 10:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.504 10:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:58.762 10:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:58.762 10:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:58.763 10:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.763 10:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:59.021 10:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.021 10:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:59.021 10:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:59.280 10:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:59.280 10:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:00.656 10:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:00.656 10:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:00.656 10:37:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.656 10:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:00.656 10:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.656 10:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:00.656 10:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.656 10:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:00.915 10:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:00.915 10:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:00.915 10:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.915 10:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:00.915 10:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.915 10:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:00.915 10:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.915 10:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:01.172 10:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.173 10:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:01.173 10:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.173 10:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:01.436 10:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.436 10:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:01.436 10:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.436 10:37:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:01.694 10:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:01.694 10:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:01.694 10:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:01.952 10:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:01.952 10:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:03.327 10:37:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:03.327 10:37:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:03.327 10:37:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.327 10:37:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:03.327 10:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:03.327 10:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:03.327 10:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.327 10:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:03.586 10:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:03.586 10:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:03.586 10:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.586 10:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:03.586 10:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.586 10:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:03.587 10:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.587 10:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:03.845 10:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.845 10:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:03.845 10:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.845 10:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:04.104 10:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:04.104 10:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:04.104 10:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.104 10:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:04.362 10:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:04.362 10:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:04.362 10:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:04.621 10:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:04.621 10:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:05.652 10:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:05.652 10:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:05.653 10:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.653 10:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:06.017 10:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:06.017 10:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:06.017 10:37:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:06.017 10:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.280 10:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.280 10:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:06.280 10:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.280 10:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:06.280 10:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.280 10:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:06.280 10:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.280 10:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:06.539 10:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.539 10:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:06.539 10:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.539 10:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:06.797 10:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:06.797 10:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:06.797 10:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.797 10:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:07.056 10:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:07.056 10:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:07.056 10:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:24:07.056 10:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:07.314 10:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:07.572 10:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:08.508 10:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:08.508 10:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:08.508 10:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.508 10:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:08.766 10:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:08.766 10:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:08.766 10:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.766 10:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:09.025 10:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.025 10:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:09.025 10:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.025 10:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:09.284 10:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.284 10:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:09.284 10:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.284 10:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:09.543 10:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.543 10:37:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:09.543 10:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.543 10:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:09.802 10:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.802 10:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:09.802 10:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.802 10:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:09.802 10:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.802 10:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:09.802 10:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:10.061 10:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:10.319 10:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:11.257 10:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:11.257 10:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:11.257 10:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.257 10:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:11.516 10:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:11.516 10:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:11.516 10:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.516 10:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:11.775 10:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.775 10:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:11.775 10:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.775 10:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:12.034 10:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:12.034 10:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:12.034 10:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.034 10:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:12.034 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:12.034 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:12.034 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.034 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:12.292 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:12.292 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:12.292 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.292 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:12.552 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:12.552 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:12.552 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:12.810 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:13.069 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
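Each check_status call expands into six port_status assertions in a fixed order, which is why the same rpc.py/jq pair repeats six times per state change. A sketch matching the traced @68..@73 sequence:

    check_status() {
        # Argument order as seen in the trace:
        #   $1 4420 current     $2 4421 current
        #   $3 4420 connected   $4 4421 connected
        #   $5 4420 accessible  $6 4421 accessible
        port_status 4420 current    "$1"
        port_status 4421 current    "$2"
        port_status 4420 connected  "$3"
        port_status 4421 connected  "$4"
        port_status 4420 accessible "$5"
        port_status 4421 accessible "$6"
    }

Note the effect of the @116 bdev_nvme_set_multipath_policy -p active_active call earlier in the trace: before it, at most one path reports current=true at a time; after it, both optimized paths do (check_status true true ...).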
00:24:14.006 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:14.006 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:14.006 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.006 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:14.265 10:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.265 10:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:14.265 10:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.265 10:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:14.265 10:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.265 10:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:14.265 10:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:14.265 10:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.523 10:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.523 10:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:14.523 10:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:14.523 10:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.782 10:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.782 10:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:14.782 10:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.782 10:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:15.041 10:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:15.041 10:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:15.041 10:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:15.041 10:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:15.300 10:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:15.300 10:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:15.300 10:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:15.300 10:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:15.559 10:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:16.937 10:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:16.937 10:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:16.937 10:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.937 10:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:16.937 10:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.937 10:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:16.937 10:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.937 10:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:16.937 10:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:16.937 10:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:16.937 10:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.937 10:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:17.196 10:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:24:17.196 10:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:17.196 10:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.196 10:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:17.455 10:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.455 10:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:17.455 10:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.455 10:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:17.713 10:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.713 10:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:17.713 10:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.713 10:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:17.972 10:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:17.972 10:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1625164 00:24:17.972 10:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1625164 ']' 00:24:17.973 10:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1625164 00:24:17.973 10:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:17.973 10:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:17.973 10:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1625164 00:24:17.973 10:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:17.973 10:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:17.973 10:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1625164' 00:24:17.973 killing process with pid 1625164 00:24:17.973 10:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1625164 00:24:17.973 10:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1625164 00:24:17.973 { 00:24:17.973 "results": [ 00:24:17.973 { 00:24:17.973 "job": "Nvme0n1", 
00:24:17.973 "core_mask": "0x4", 00:24:17.973 "workload": "verify", 00:24:17.973 "status": "terminated", 00:24:17.973 "verify_range": { 00:24:17.973 "start": 0, 00:24:17.973 "length": 16384 00:24:17.973 }, 00:24:17.973 "queue_depth": 128, 00:24:17.973 "io_size": 4096, 00:24:17.973 "runtime": 28.969762, 00:24:17.973 "iops": 10778.37643264035, 00:24:17.973 "mibps": 42.10303294000137, 00:24:17.973 "io_failed": 0, 00:24:17.973 "io_timeout": 0, 00:24:17.973 "avg_latency_us": 11854.435491560635, 00:24:17.973 "min_latency_us": 155.06285714285715, 00:24:17.973 "max_latency_us": 3083812.083809524 00:24:17.973 } 00:24:17.973 ], 00:24:17.973 "core_count": 1 00:24:17.973 } 00:24:18.258 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1625164 00:24:18.258 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:18.258 [2024-12-12 10:37:21.555340] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:24:18.258 [2024-12-12 10:37:21.555393] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1625164 ] 00:24:18.258 [2024-12-12 10:37:21.626944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.258 [2024-12-12 10:37:21.668045] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:24:18.258 Running I/O for 90 seconds... 00:24:18.258 11511.00 IOPS, 44.96 MiB/s [2024-12-12T09:37:52.281Z] 11626.50 IOPS, 45.42 MiB/s [2024-12-12T09:37:52.281Z] 11667.67 IOPS, 45.58 MiB/s [2024-12-12T09:37:52.281Z] 11684.50 IOPS, 45.64 MiB/s [2024-12-12T09:37:52.281Z] 11705.80 IOPS, 45.73 MiB/s [2024-12-12T09:37:52.281Z] 11700.67 IOPS, 45.71 MiB/s [2024-12-12T09:37:52.281Z] 11698.57 IOPS, 45.70 MiB/s [2024-12-12T09:37:52.281Z] 11681.38 IOPS, 45.63 MiB/s [2024-12-12T09:37:52.281Z] 11687.67 IOPS, 45.65 MiB/s [2024-12-12T09:37:52.281Z] 11680.10 IOPS, 45.63 MiB/s [2024-12-12T09:37:52.281Z] 11691.45 IOPS, 45.67 MiB/s [2024-12-12T09:37:52.281Z] 11692.25 IOPS, 45.67 MiB/s [2024-12-12T09:37:52.281Z] [2024-12-12 10:37:35.735457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.258 [2024-12-12 10:37:35.735494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:18.258 [2024-12-12 10:37:35.735513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.258 [2024-12-12 10:37:35.735536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:18.258 [2024-12-12 10:37:35.735550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.258 [2024-12-12 10:37:35.735557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:18.258 [2024-12-12 10:37:35.735575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:18.258 [2024-12-12 10:37:35.735582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:18.258 [2024-12-12 10:37:35.735595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.259 [2024-12-12 10:37:35.735602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:18.259 [2024-12-12 10:37:35.735614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:23176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.259 [2024-12-12 10:37:35.735621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:18.259 [2024-12-12 10:37:35.735634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.259 [2024-12-12 10:37:35.735640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:18.259 [2024-12-12 10:37:35.735652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.259 [2024-12-12 10:37:35.735659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:18.259 [2024-12-12 10:37:35.735670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.259 [2024-12-12 10:37:35.735677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:18.259 [2024-12-12 10:37:35.735689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.259 [2024-12-12 10:37:35.735702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:18.259 [2024-12-12 10:37:35.735715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.259 [2024-12-12 10:37:35.735721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:18.259 [2024-12-12 10:37:35.735734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:23224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.259 [2024-12-12 10:37:35.735741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:18.259 [2024-12-12 10:37:35.735753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.259 [2024-12-12 10:37:35.735759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.259 [2024-12-12 10:37:35.735771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 
lba:23240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.259 [2024-12-12 10:37:35.735778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:18.259 [2024-12-12 10:37:35.735790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.259 [2024-12-12 10:37:35.735797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:18.259 [2024-12-12 10:37:35.735809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.259 [2024-12-12 10:37:35.735816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:18.259 [2024-12-12 10:37:35.735828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.259 [2024-12-12 10:37:35.735834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:18.259 [2024-12-12 10:37:35.735847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:23272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.259 [2024-12-12 10:37:35.735854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:18.259 [2024-12-12 10:37:35.735867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.259 [2024-12-12 10:37:35.735873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:18.259 [2024-12-12 10:37:35.735885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:23288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.259 [2024-12-12 10:37:35.735892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:18.259 [2024-12-12 10:37:35.735904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.259 [2024-12-12 10:37:35.735911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:18.259 [2024-12-12 10:37:35.735923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.259 [2024-12-12 10:37:35.735931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:18.259 [2024-12-12 10:37:35.735943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.259 [2024-12-12 10:37:35.735950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:18.259 [2024-12-12 10:37:35.735962] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:23320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.259 [2024-12-12 10:37:35.735968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:18.259 [2024-12-12 10:37:35.735980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.259 [2024-12-12 10:37:35.735986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:18.259 [2024-12-12 10:37:35.735999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.259 [2024-12-12 10:37:35.736006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:18.259 [2024-12-12 10:37:35.736018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.259 [2024-12-12 10:37:35.736025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:18.259 [2024-12-12 10:37:35.736037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.259 [2024-12-12 10:37:35.736044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:18.259 [2024-12-12 10:37:35.736056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.259 [2024-12-12 10:37:35.736063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:18.259 [2024-12-12 10:37:35.736075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:23368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.259 [2024-12-12 10:37:35.736081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:18.259 [2024-12-12 10:37:35.736094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.259 [2024-12-12 10:37:35.736100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:18.259 [2024-12-12 10:37:35.736112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:22736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.259 [2024-12-12 10:37:35.736118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:18.259 [2024-12-12 10:37:35.736131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.259 [2024-12-12 10:37:35.736137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 
00:24:18.259 [2024-12-12 10:37:35.736420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.259 [2024-12-12 10:37:35.736430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:18.259 [2024-12-12 10:37:35.736446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.259 [2024-12-12 10:37:35.736453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:18.259 [2024-12-12 10:37:35.736465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.259 [2024-12-12 10:37:35.736472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:18.259 [2024-12-12 10:37:35.736485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.259 [2024-12-12 10:37:35.736491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:18.259 [2024-12-12 10:37:35.736505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.259 [2024-12-12 10:37:35.736511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:18.259 [2024-12-12 10:37:35.736523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:23424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.259 [2024-12-12 10:37:35.736530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:18.259 [2024-12-12 10:37:35.736542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.259 [2024-12-12 10:37:35.736549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:18.259 [2024-12-12 10:37:35.736561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:23440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.259 [2024-12-12 10:37:35.736567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:18.259 [2024-12-12 10:37:35.736585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.259 [2024-12-12 10:37:35.736591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:18.259 [2024-12-12 10:37:35.736603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.259 [2024-12-12 10:37:35.736610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:18.259 [2024-12-12 10:37:35.736622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.259 [2024-12-12 10:37:35.736629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:18.259 [2024-12-12 10:37:35.736641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.260 [2024-12-12 10:37:35.736647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.260 [2024-12-12 10:37:35.736659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.260 [2024-12-12 10:37:35.736666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:18.260 [2024-12-12 10:37:35.736678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.260 [2024-12-12 10:37:35.736686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:18.260 [2024-12-12 10:37:35.736698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.260 [2024-12-12 10:37:35.736704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:18.260 [2024-12-12 10:37:35.736716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.260 [2024-12-12 10:37:35.736724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:18.260 [2024-12-12 10:37:35.736736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.260 [2024-12-12 10:37:35.736743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:18.260 [2024-12-12 10:37:35.736754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.260 [2024-12-12 10:37:35.736761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:18.260 [2024-12-12 10:37:35.736773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.260 [2024-12-12 10:37:35.736780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:18.260 [2024-12-12 10:37:35.736792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.260 [2024-12-12 10:37:35.736798] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:18.260 [2024-12-12 10:37:35.736810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:23544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.260 [2024-12-12 10:37:35.736817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:18.260 [2024-12-12 10:37:35.736829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.260 [2024-12-12 10:37:35.736835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:18.260 [2024-12-12 10:37:35.736847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.260 [2024-12-12 10:37:35.736854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:18.260 [2024-12-12 10:37:35.736866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.260 [2024-12-12 10:37:35.736873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:18.260 [2024-12-12 10:37:35.737084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.260 [2024-12-12 10:37:35.737093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:18.260 [2024-12-12 10:37:35.737106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.260 [2024-12-12 10:37:35.737115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:18.260 [2024-12-12 10:37:35.737127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:23592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.260 [2024-12-12 10:37:35.737134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:18.260 [2024-12-12 10:37:35.737145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.260 [2024-12-12 10:37:35.737152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:18.260 [2024-12-12 10:37:35.737164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.260 [2024-12-12 10:37:35.737171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:18.260 [2024-12-12 10:37:35.737183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.260 [2024-12-12 
10:37:35.737190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:18.260 [2024-12-12 10:37:35.737202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.260 [2024-12-12 10:37:35.737208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:18.260 [2024-12-12 10:37:35.737220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.260 [2024-12-12 10:37:35.737227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:18.260 [2024-12-12 10:37:35.737239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.260 [2024-12-12 10:37:35.737246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:18.260 [2024-12-12 10:37:35.737258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:23648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.260 [2024-12-12 10:37:35.737265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:18.260 [2024-12-12 10:37:35.737277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.260 [2024-12-12 10:37:35.737284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:18.260 [2024-12-12 10:37:35.737296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:23664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.260 [2024-12-12 10:37:35.737302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:18.260 [2024-12-12 10:37:35.737314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.260 [2024-12-12 10:37:35.737321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:18.260 [2024-12-12 10:37:35.737333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.260 [2024-12-12 10:37:35.737340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:18.260 [2024-12-12 10:37:35.737353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.260 [2024-12-12 10:37:35.737360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:18.260 [2024-12-12 10:37:35.737372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:23696 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:24:18.260 [2024-12-12 10:37:35.737379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.260 [2024-12-12 10:37:35.737391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.260 [2024-12-12 10:37:35.737397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:18.260 [2024-12-12 10:37:35.737409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.260 [2024-12-12 10:37:35.737416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:18.260 [2024-12-12 10:37:35.737428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.260 [2024-12-12 10:37:35.737435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.260 [2024-12-12 10:37:35.737447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.260 [2024-12-12 10:37:35.737454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.260 [2024-12-12 10:37:35.737466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.260 [2024-12-12 10:37:35.737472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:18.260 [2024-12-12 10:37:35.737485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.260 [2024-12-12 10:37:35.737491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:18.260 [2024-12-12 10:37:35.737503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.260 [2024-12-12 10:37:35.737510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:18.260 [2024-12-12 10:37:35.737522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.260 [2024-12-12 10:37:35.737528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:18.260 [2024-12-12 10:37:35.737548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.260 [2024-12-12 10:37:35.737555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:18.260 [2024-12-12 10:37:35.737770] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:108 nsid:1 lba:22824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.260 [2024-12-12 10:37:35.737780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:18.260 [2024-12-12 10:37:35.737798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.260 [2024-12-12 10:37:35.737807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:18.261 [2024-12-12 10:37:35.737819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.261 [2024-12-12 10:37:35.737826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:18.261 [2024-12-12 10:37:35.737838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.261 [2024-12-12 10:37:35.737845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:18.261 [2024-12-12 10:37:35.737857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.261 [2024-12-12 10:37:35.737864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:18.261 [2024-12-12 10:37:35.737877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.261 [2024-12-12 10:37:35.737883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:18.261 [2024-12-12 10:37:35.737896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.261 [2024-12-12 10:37:35.737903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:18.261 [2024-12-12 10:37:35.738043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.261 [2024-12-12 10:37:35.738052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:18.261 [2024-12-12 10:37:35.738065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.261 [2024-12-12 10:37:35.738072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:18.261 [2024-12-12 10:37:35.738084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.261 [2024-12-12 10:37:35.738091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:18.261 [2024-12-12 
10:37:35.738103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:23728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.261 [2024-12-12 10:37:35.738110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:18.261 [2024-12-12 10:37:35.738122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.261 [2024-12-12 10:37:35.738129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:18.261 [2024-12-12 10:37:35.738140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.261 [2024-12-12 10:37:35.738147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:18.261 [2024-12-12 10:37:35.738161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.261 [2024-12-12 10:37:35.738168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:18.261 [2024-12-12 10:37:35.738180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.261 [2024-12-12 10:37:35.738186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:18.261 [2024-12-12 10:37:35.738200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.261 [2024-12-12 10:37:35.738206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:18.261 [2024-12-12 10:37:35.738220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.261 [2024-12-12 10:37:35.738227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:18.261 [2024-12-12 10:37:35.738238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.261 [2024-12-12 10:37:35.738245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:18.261 [2024-12-12 10:37:35.738257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.261 [2024-12-12 10:37:35.738264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:18.261 [2024-12-12 10:37:35.738276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.261 [2024-12-12 10:37:35.738283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 
cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:18.261 [2024-12-12 10:37:35.738294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.261 [2024-12-12 10:37:35.738301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:18.261 [2024-12-12 10:37:35.738313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.261 [2024-12-12 10:37:35.738320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:18.261 [2024-12-12 10:37:35.738332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.261 [2024-12-12 10:37:35.738338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:18.261 [2024-12-12 10:37:35.738350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.261 [2024-12-12 10:37:35.738357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:18.261 [2024-12-12 10:37:35.738503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.261 [2024-12-12 10:37:35.738512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:18.261 [2024-12-12 10:37:35.738525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.261 [2024-12-12 10:37:35.738534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:18.261 [2024-12-12 10:37:35.738546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.261 [2024-12-12 10:37:35.738553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.261 [2024-12-12 10:37:35.738565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.261 [2024-12-12 10:37:35.738578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:18.261 [2024-12-12 10:37:35.738590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:22992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.261 [2024-12-12 10:37:35.738597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:18.261 [2024-12-12 10:37:35.738609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:23000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.261 [2024-12-12 10:37:35.738616] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:18.261 [2024-12-12 10:37:35.738628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:23008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.261 [2024-12-12 10:37:35.738635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:18.261 [2024-12-12 10:37:35.738649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.261 [2024-12-12 10:37:35.738656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:18.261 [2024-12-12 10:37:35.738668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.261 [2024-12-12 10:37:35.738675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:18.261 [2024-12-12 10:37:35.738687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.261 [2024-12-12 10:37:35.738693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:18.261 [2024-12-12 10:37:35.738705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.261 [2024-12-12 10:37:35.738712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:18.261 [2024-12-12 10:37:35.738724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:23048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.261 [2024-12-12 10:37:35.738731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:18.261 [2024-12-12 10:37:35.738743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.261 [2024-12-12 10:37:35.738749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:18.261 [2024-12-12 10:37:35.738761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:23064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.261 [2024-12-12 10:37:35.738770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:18.261 [2024-12-12 10:37:35.738782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.261 [2024-12-12 10:37:35.738788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:18.261 [2024-12-12 10:37:35.738800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.261 
[2024-12-12 10:37:35.738807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:18.261 [2024-12-12 10:37:35.738819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:23088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.261 [2024-12-12 10:37:35.738825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:18.261 [2024-12-12 10:37:35.738837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:23096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.261 [2024-12-12 10:37:35.738844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:18.262 [2024-12-12 10:37:35.738856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.262 [2024-12-12 10:37:35.738862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:18.262 [2024-12-12 10:37:35.738874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.262 [2024-12-12 10:37:35.738881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:18.262 [2024-12-12 10:37:35.738893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.262 [2024-12-12 10:37:35.738899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:18.262 [2024-12-12 10:37:35.738911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.262 [2024-12-12 10:37:35.738918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:18.262 [2024-12-12 10:37:35.738931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:23136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.262 [2024-12-12 10:37:35.738937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:18.262 [2024-12-12 10:37:35.739118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.262 [2024-12-12 10:37:35.739128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:18.262 [2024-12-12 10:37:35.739142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:23152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.262 [2024-12-12 10:37:35.739148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:18.262 [2024-12-12 10:37:35.739160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23160 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.262 [2024-12-12 10:37:35.739167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:18.262 [2024-12-12 10:37:35.739181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.262 [2024-12-12 10:37:35.739188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:18.262 [2024-12-12 10:37:35.739200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.262 [2024-12-12 10:37:35.739206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:18.262 [2024-12-12 10:37:35.739218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.262 [2024-12-12 10:37:35.739224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:18.262 [2024-12-12 10:37:35.739236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.262 [2024-12-12 10:37:35.739243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:18.262 [2024-12-12 10:37:35.739255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:23200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.262 [2024-12-12 10:37:35.739262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:18.262 [2024-12-12 10:37:35.739274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.262 [2024-12-12 10:37:35.739280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:18.262 [2024-12-12 10:37:35.739292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.262 [2024-12-12 10:37:35.739299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:18.262 [2024-12-12 10:37:35.739310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.262 [2024-12-12 10:37:35.739317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:18.262 [2024-12-12 10:37:35.739329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:23232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.262 [2024-12-12 10:37:35.739335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.262 [2024-12-12 10:37:35.739347] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.262 [2024-12-12 10:37:35.739354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:18.262 [2024-12-12 10:37:35.739366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:23248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.262 [2024-12-12 10:37:35.739372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:18.262 [2024-12-12 10:37:35.739384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.262 [2024-12-12 10:37:35.739390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:18.262 [2024-12-12 10:37:35.739405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.262 [2024-12-12 10:37:35.739411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:18.262 [2024-12-12 10:37:35.739425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.262 [2024-12-12 10:37:35.739431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:18.262 [2024-12-12 10:37:35.739443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.262 [2024-12-12 10:37:35.739450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:18.262 [2024-12-12 10:37:35.739462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.262 [2024-12-12 10:37:35.739468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:18.262 [2024-12-12 10:37:35.739480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:23296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.262 [2024-12-12 10:37:35.739487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:18.262 [2024-12-12 10:37:35.739499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.262 [2024-12-12 10:37:35.739506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:18.262 [2024-12-12 10:37:35.739518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.262 [2024-12-12 10:37:35.739524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:18.262 [2024-12-12 10:37:35.739536] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.262 [2024-12-12 10:37:35.739543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:18.262 [2024-12-12 10:37:35.739555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.262 [2024-12-12 10:37:35.739561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:18.262 [2024-12-12 10:37:35.739579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.262 [2024-12-12 10:37:35.739586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:18.262 [2024-12-12 10:37:35.739598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:23344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.262 [2024-12-12 10:37:35.739604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:18.262 [2024-12-12 10:37:35.739616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.262 [2024-12-12 10:37:35.739623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:18.262 [2024-12-12 10:37:35.739635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.262 [2024-12-12 10:37:35.739643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:18.262 [2024-12-12 10:37:35.739655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.262 [2024-12-12 10:37:35.739662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:18.262 [2024-12-12 10:37:35.739674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.262 [2024-12-12 10:37:35.739680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:18.262 [2024-12-12 10:37:35.739693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.262 [2024-12-12 10:37:35.739699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:18.262 [2024-12-12 10:37:35.739712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.262 [2024-12-12 10:37:35.739719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0055 p:0 m:0 
dnr:0
00:24:18.262 [2024-12-12 10:37:35.739733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.262 [2024-12-12 10:37:35.739739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
[... ~67 similar command/completion notice pairs omitted (10:37:35.739-10:37:35.751): WRITE commands for lba:23392-23696 and lba:23704-23752 (len:8, SGL DATA BLOCK OFFSET) and READ commands for lba:22752-22920 (len:8, SGL TRANSPORT DATA BLOCK), each completed ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1, sqhd advancing 0x0057-0x0019 ...]
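Decoded against the NVMe base specification, the status printed on every completion above is Status Code Type 0x3 (Path Related Status), Status Code 0x02 (Asymmetric Access Inaccessible): the controller's ANA state reports the namespace unreachable on this path, so the target fails each queued READ/WRITE instead of executing it, and dnr:0 leaves the host free to retry once the path recovers. A minimal decoding sketch for one of these notice lines (illustrative only, not part of the SPDK tree; the status-name table is deliberately abridged to the one code seen here):

import re

# SPDK prints completions as "<NAME> (<SCT>/<SC>) qid:.. cid:.. cdw0:.. sqhd:.. p:.. m:.. dnr:.."
COMPLETION_RE = re.compile(
    r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\)\s+"
    r"qid:(?P<qid>\d+)\s+cid:(?P<cid>\d+)\s+cdw0:(?P<cdw0>[0-9a-f]+)\s+"
    r"sqhd:(?P<sqhd>[0-9a-f]+)\s+p:(?P<p>\d)\s+m:(?P<m>\d)\s+dnr:(?P<dnr>\d)"
)

# Abridged: only the (SCT, SC) pair that appears in this log.
STATUS_NAMES = {
    (0x3, 0x2): "ASYMMETRIC ACCESS INACCESSIBLE",  # ANA state blocks I/O on this path
}

def decode_completion(line):
    """Return the completion fields of one SPDK notice line, or None."""
    m = COMPLETION_RE.search(line)
    if not m:
        return None
    sct, sc = int(m["sct"], 16), int(m["sc"], 16)
    return {
        "status": STATUS_NAMES.get((sct, sc), "sct=%#x sc=%#x" % (sct, sc)),
        "qid": int(m["qid"]),        # I/O queue pair id
        "cid": int(m["cid"]),        # echoes the command identifier
        "sqhd": int(m["sqhd"], 16),  # submission queue head pointer
        "phase": int(m["p"]),        # phase tag
        "more": int(m["m"]),         # more status information available
        "dnr": bool(int(m["dnr"])),  # "do not retry" bit
    }

if __name__ == "__main__":
    sample = ("ASYMMETRIC ACCESS INACCESSIBLE (03/02) "
              "qid:1 cid:38 cdw0:0 sqhd:0056 p:0 m:0 dnr:0")
    print(decode_completion(sample))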
[... ~59 more pairs omitted (10:37:35.751-10:37:35.752): READ lba:22920-23128 and lba:22736, WRITE lba:23136-23376, all completed ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1, sqhd 0x001a-0x0054 ...]
00:24:18.266 [2024-12-12 10:37:35.752555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:18.266 [2024-12-12 10:37:35.752564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
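Two regularities make this storm compressible: sqhd, the submission-queue head pointer echoed in each completion, advances by one per completion and wraps from 0x007f back to 0x0000, so the cycle ending at sqhd:0055 above accounts for exactly one 128-entry submission queue's worth of failed commands, and the cycle that follows re-drives the same LBA window with fresh cids. A throwaway summarizer along these lines (hypothetical script and filename, not part of the test suite) reduces such a log to opcode counts, LBA ranges, and status totals:

import re
from collections import Counter

CMD_RE = re.compile(r"nvme_io_qpair_print_command: \*NOTICE\*: "
                    r"(?P<op>READ|WRITE) sqid:\d+ cid:\d+ nsid:\d+ lba:(?P<lba>\d+)")
CPL_RE = re.compile(r"spdk_nvme_print_completion: \*NOTICE\*: "
                    r"(?P<name>[A-Z ]+) \((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\)")

def summarize(path):
    """Condense repeated SPDK command/completion notices in a saved console log."""
    ops, statuses = Counter(), Counter()
    lba_min, lba_max = {}, {}
    with open(path, encoding="utf-8", errors="replace") as log:
        for line in log:
            if (m := CMD_RE.search(line)):
                op, lba = m["op"], int(m["lba"])
                ops[op] += 1
                lba_min[op] = min(lba, lba_min.get(op, lba))
                lba_max[op] = max(lba, lba_max.get(op, lba))
            elif (m := CPL_RE.search(line)):
                statuses["%s (%s/%s)" % (m["name"].strip(), m["sct"], m["sc"])] += 1
    for op, n in ops.items():
        print("%s: %d commands, lba %d..%d" % (op, n, lba_min[op], lba_max[op]))
    for status, n in statuses.items():
        print("%s: %d completions" % (status, n))

if __name__ == "__main__":
    summarize("build.log")  # hypothetical saved copy of this console log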
00:24:18.266 [2024-12-12 10:37:35.752582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.266 [2024-12-12 10:37:35.752590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
[... ~67 similar pairs omitted (10:37:35.753-10:37:35.761): the same sweep re-driven with new cids, WRITE lba:23392-23696 and lba:23704-23752, READ lba:22752-22912, every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1, sqhd 0x0057-0x0019 ...]
00:24:18.268 [2024-12-12 10:37:35.761084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:18.268 [2024-12-12 10:37:35.761093] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:18.268 [2024-12-12 10:37:35.761109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:22928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.268 [2024-12-12 10:37:35.761118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:18.268 [2024-12-12 10:37:35.761134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.268 [2024-12-12 10:37:35.761143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:18.268 [2024-12-12 10:37:35.761159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:22944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.268 [2024-12-12 10:37:35.761168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:18.268 [2024-12-12 10:37:35.761184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.268 [2024-12-12 10:37:35.761193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:18.268 [2024-12-12 10:37:35.761209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.268 [2024-12-12 10:37:35.761218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:18.268 [2024-12-12 10:37:35.761234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.268 [2024-12-12 10:37:35.761243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:18.268 [2024-12-12 10:37:35.761259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.268 [2024-12-12 10:37:35.761270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.268 [2024-12-12 10:37:35.761286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.268 [2024-12-12 10:37:35.761294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:18.268 [2024-12-12 10:37:35.761311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.268 [2024-12-12 10:37:35.761319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:18.268 [2024-12-12 10:37:35.761336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.268 [2024-12-12 
10:37:35.761344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:18.268 [2024-12-12 10:37:35.761360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.268 [2024-12-12 10:37:35.761369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:18.268 [2024-12-12 10:37:35.761385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.268 [2024-12-12 10:37:35.761394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:18.268 [2024-12-12 10:37:35.761410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:23024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.268 [2024-12-12 10:37:35.761419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:18.268 [2024-12-12 10:37:35.761436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.268 [2024-12-12 10:37:35.761444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:18.268 [2024-12-12 10:37:35.761460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:23040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.268 [2024-12-12 10:37:35.761469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:18.268 [2024-12-12 10:37:35.761485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.268 [2024-12-12 10:37:35.761494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:18.268 [2024-12-12 10:37:35.761511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.268 [2024-12-12 10:37:35.761519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:18.268 [2024-12-12 10:37:35.761536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.268 [2024-12-12 10:37:35.761544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:18.268 [2024-12-12 10:37:35.761560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.268 [2024-12-12 10:37:35.761581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:18.268 [2024-12-12 10:37:35.761598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23080 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.268 [2024-12-12 10:37:35.761607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:18.268 [2024-12-12 10:37:35.761623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.268 [2024-12-12 10:37:35.761632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:18.268 [2024-12-12 10:37:35.761648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.268 [2024-12-12 10:37:35.761657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:18.268 [2024-12-12 10:37:35.761674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:23104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.268 [2024-12-12 10:37:35.761683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:18.268 [2024-12-12 10:37:35.761699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.268 [2024-12-12 10:37:35.761707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:18.268 [2024-12-12 10:37:35.761724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.268 [2024-12-12 10:37:35.761733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:18.268 [2024-12-12 10:37:35.761749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.268 [2024-12-12 10:37:35.761758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:18.268 [2024-12-12 10:37:35.761774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:23136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.268 [2024-12-12 10:37:35.761783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:18.268 [2024-12-12 10:37:35.762214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:23144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.268 [2024-12-12 10:37:35.762229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:18.268 [2024-12-12 10:37:35.762247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:23152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.268 [2024-12-12 10:37:35.762256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:18.268 [2024-12-12 10:37:35.762272] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:23160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.268 [2024-12-12 10:37:35.762281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:18.268 [2024-12-12 10:37:35.762297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.268 [2024-12-12 10:37:35.762306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:18.268 [2024-12-12 10:37:35.762325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.268 [2024-12-12 10:37:35.762334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:18.268 [2024-12-12 10:37:35.762351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.268 [2024-12-12 10:37:35.762359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:18.268 [2024-12-12 10:37:35.762375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.269 [2024-12-12 10:37:35.762384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:18.269 [2024-12-12 10:37:35.762400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.269 [2024-12-12 10:37:35.762409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:18.269 [2024-12-12 10:37:35.762425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.269 [2024-12-12 10:37:35.762434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:18.269 [2024-12-12 10:37:35.762450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.269 [2024-12-12 10:37:35.762459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:18.269 [2024-12-12 10:37:35.762474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.269 [2024-12-12 10:37:35.762483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:18.269 [2024-12-12 10:37:35.762499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.269 [2024-12-12 10:37:35.762508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:24:18.269 [2024-12-12 10:37:35.762524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:23240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.269 [2024-12-12 10:37:35.762533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:18.269 [2024-12-12 10:37:35.762549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.269 [2024-12-12 10:37:35.762558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:18.269 [2024-12-12 10:37:35.762581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.269 [2024-12-12 10:37:35.762590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:18.269 [2024-12-12 10:37:35.762607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.269 [2024-12-12 10:37:35.762615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:18.269 [2024-12-12 10:37:35.762634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.269 [2024-12-12 10:37:35.762643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:18.269 [2024-12-12 10:37:35.762659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.269 [2024-12-12 10:37:35.762668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:18.269 [2024-12-12 10:37:35.762684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.269 [2024-12-12 10:37:35.762693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:18.269 [2024-12-12 10:37:35.762709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:23296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.269 [2024-12-12 10:37:35.762718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:18.269 [2024-12-12 10:37:35.762734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.269 [2024-12-12 10:37:35.762743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:18.269 [2024-12-12 10:37:35.762759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:23312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.269 [2024-12-12 10:37:35.762768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:65 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:18.269 [2024-12-12 10:37:35.762784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.269 [2024-12-12 10:37:35.762792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:18.269 [2024-12-12 10:37:35.762808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.269 [2024-12-12 10:37:35.762817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:18.269 [2024-12-12 10:37:35.762833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.269 [2024-12-12 10:37:35.762842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:18.269 [2024-12-12 10:37:35.762858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:23344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.269 [2024-12-12 10:37:35.762867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:18.269 [2024-12-12 10:37:35.762883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:23352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.269 [2024-12-12 10:37:35.762892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:18.269 [2024-12-12 10:37:35.762908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.269 [2024-12-12 10:37:35.762917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:18.269 [2024-12-12 10:37:35.762933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.269 [2024-12-12 10:37:35.762944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:18.269 [2024-12-12 10:37:35.762961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.269 [2024-12-12 10:37:35.762969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:18.269 [2024-12-12 10:37:35.762986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.269 [2024-12-12 10:37:35.762994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:18.269 [2024-12-12 10:37:35.763011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.269 [2024-12-12 10:37:35.763020] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:18.269 [2024-12-12 10:37:35.763037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:23384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.269 [2024-12-12 10:37:35.763046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:18.269 [2024-12-12 10:37:35.763062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.269 [2024-12-12 10:37:35.763070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:18.269 [2024-12-12 10:37:35.763087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.269 [2024-12-12 10:37:35.763095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:18.269 [2024-12-12 10:37:35.763111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.269 [2024-12-12 10:37:35.763120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:18.269 [2024-12-12 10:37:35.763137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.269 [2024-12-12 10:37:35.763145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:18.269 [2024-12-12 10:37:35.763161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.269 [2024-12-12 10:37:35.763170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:18.269 [2024-12-12 10:37:35.763186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.269 [2024-12-12 10:37:35.763195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:18.269 [2024-12-12 10:37:35.763211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.269 [2024-12-12 10:37:35.763220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:18.269 [2024-12-12 10:37:35.763236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.269 [2024-12-12 10:37:35.763247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:18.269 [2024-12-12 10:37:35.763263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:18.269 [2024-12-12 10:37:35.763272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:18.269 [2024-12-12 10:37:35.763288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.269 [2024-12-12 10:37:35.763296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:18.269 [2024-12-12 10:37:35.763313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.269 [2024-12-12 10:37:35.763321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.269 [2024-12-12 10:37:35.763338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.269 [2024-12-12 10:37:35.763346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:18.269 [2024-12-12 10:37:35.763362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:23488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.269 [2024-12-12 10:37:35.763371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:18.270 [2024-12-12 10:37:35.763387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.270 [2024-12-12 10:37:35.763396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:18.270 [2024-12-12 10:37:35.763412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:23504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.270 [2024-12-12 10:37:35.763421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:18.270 [2024-12-12 10:37:35.763437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.270 [2024-12-12 10:37:35.763446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:18.270 [2024-12-12 10:37:35.763462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.270 [2024-12-12 10:37:35.763471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:18.270 [2024-12-12 10:37:35.763487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.270 [2024-12-12 10:37:35.763496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:18.270 [2024-12-12 10:37:35.763512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23536 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.270 [2024-12-12 10:37:35.763521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:18.270 [2024-12-12 10:37:35.763537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.270 [2024-12-12 10:37:35.763546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:18.270 [2024-12-12 10:37:35.763563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.270 [2024-12-12 10:37:35.763578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:18.270 [2024-12-12 10:37:35.763594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.270 [2024-12-12 10:37:35.763603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:18.270 [2024-12-12 10:37:35.763619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:23568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.270 [2024-12-12 10:37:35.763628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:18.270 [2024-12-12 10:37:35.763644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.270 [2024-12-12 10:37:35.763653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:18.270 [2024-12-12 10:37:35.763669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.270 [2024-12-12 10:37:35.763678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:18.270 [2024-12-12 10:37:35.763694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.270 [2024-12-12 10:37:35.763703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:18.270 [2024-12-12 10:37:35.763719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.270 [2024-12-12 10:37:35.763728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:18.270 [2024-12-12 10:37:35.763744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.270 [2024-12-12 10:37:35.763753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:18.270 [2024-12-12 10:37:35.763769] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:23616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.270 [2024-12-12 10:37:35.763777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:18.270 [2024-12-12 10:37:35.763794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.270 [2024-12-12 10:37:35.763802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:18.270 [2024-12-12 10:37:35.763819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.270 [2024-12-12 10:37:35.763827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:18.270 [2024-12-12 10:37:35.763844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.270 [2024-12-12 10:37:35.763853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:18.270 [2024-12-12 10:37:35.763874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.270 [2024-12-12 10:37:35.763883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:18.270 [2024-12-12 10:37:35.763899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.270 [2024-12-12 10:37:35.763908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:18.270 [2024-12-12 10:37:35.763924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:23664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.270 [2024-12-12 10:37:35.763933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:18.270 [2024-12-12 10:37:35.763949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.270 [2024-12-12 10:37:35.763959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:18.270 [2024-12-12 10:37:35.763975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.270 [2024-12-12 10:37:35.763984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:18.270 [2024-12-12 10:37:35.764000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.270 [2024-12-12 10:37:35.764008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:18.270 [2024-12-12 10:37:35.764025] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.270 [2024-12-12 10:37:35.764034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.270 [2024-12-12 10:37:35.764050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.270 [2024-12-12 10:37:35.764059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:18.270 [2024-12-12 10:37:35.764075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.270 [2024-12-12 10:37:35.764084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:18.270 [2024-12-12 10:37:35.764100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.270 [2024-12-12 10:37:35.764109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.270 [2024-12-12 10:37:35.764126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:22776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.270 [2024-12-12 10:37:35.764135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.270 [2024-12-12 10:37:35.764151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.270 [2024-12-12 10:37:35.764160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:18.270 [2024-12-12 10:37:35.764176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.270 [2024-12-12 10:37:35.764186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:18.270 [2024-12-12 10:37:35.764203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:22800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.270 [2024-12-12 10:37:35.764211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:18.270 [2024-12-12 10:37:35.764228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:22808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.270 [2024-12-12 10:37:35.764236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:18.270 [2024-12-12 10:37:35.764253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.270 [2024-12-12 10:37:35.764261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0006 p:0 
m:0 dnr:0 00:24:18.270 [2024-12-12 10:37:35.764278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.270 [2024-12-12 10:37:35.764286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:18.270 [2024-12-12 10:37:35.764303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.270 [2024-12-12 10:37:35.764311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:18.270 [2024-12-12 10:37:35.764328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.270 [2024-12-12 10:37:35.764337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:18.270 [2024-12-12 10:37:35.764353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:22848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.270 [2024-12-12 10:37:35.764362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:18.270 [2024-12-12 10:37:35.764378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.270 [2024-12-12 10:37:35.764387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:18.271 [2024-12-12 10:37:35.764404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.271 [2024-12-12 10:37:35.764412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:18.271 [2024-12-12 10:37:35.765345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.271 [2024-12-12 10:37:35.765365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:18.271 [2024-12-12 10:37:35.765384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:23704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.271 [2024-12-12 10:37:35.765393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:18.271 [2024-12-12 10:37:35.765410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:23712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.271 [2024-12-12 10:37:35.765421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:18.271 [2024-12-12 10:37:35.765438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.271 [2024-12-12 10:37:35.765446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:18.271 [2024-12-12 10:37:35.765463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.271 [2024-12-12 10:37:35.765472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:18.271 [2024-12-12 10:37:35.765488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.271 [2024-12-12 10:37:35.765497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:18.271 [2024-12-12 10:37:35.765513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.271 [2024-12-12 10:37:35.765522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:18.271 [2024-12-12 10:37:35.765538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.271 [2024-12-12 10:37:35.765547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:18.271 [2024-12-12 10:37:35.765563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:22880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.271 [2024-12-12 10:37:35.765578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:18.271 [2024-12-12 10:37:35.765595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.271 [2024-12-12 10:37:35.765604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:18.271 [2024-12-12 10:37:35.765620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.271 [2024-12-12 10:37:35.765629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:18.271 [2024-12-12 10:37:35.765645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:22904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.271 [2024-12-12 10:37:35.765654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:18.271 [2024-12-12 10:37:35.765670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.271 [2024-12-12 10:37:35.765679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:18.271 [2024-12-12 10:37:35.765695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.271 [2024-12-12 10:37:35.765704] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:18.271 [2024-12-12 10:37:35.765720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:22928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.271 [2024-12-12 10:37:35.765729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:18.271 [2024-12-12 10:37:35.765747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.271 [2024-12-12 10:37:35.765756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:18.271 [2024-12-12 10:37:35.765772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.271 [2024-12-12 10:37:35.765781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:18.271 [2024-12-12 10:37:35.765798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.271 [2024-12-12 10:37:35.765807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:18.271 [2024-12-12 10:37:35.765823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.271 [2024-12-12 10:37:35.765831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:18.271 [2024-12-12 10:37:35.765848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.271 [2024-12-12 10:37:35.765857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:18.271 [2024-12-12 10:37:35.765874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.271 [2024-12-12 10:37:35.765882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.271 [2024-12-12 10:37:35.765898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.271 [2024-12-12 10:37:35.765908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:18.271 [2024-12-12 10:37:35.765923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.271 [2024-12-12 10:37:35.765933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:18.271 [2024-12-12 10:37:35.765949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:23000 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0
00:24:18.271-00:24:18.276 [2024-12-12 10:37:35.765958 - 10:37:35.775431] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated for every outstanding I/O on qid:1 -- READ sqid:1 nsid:1 lba:22736-23128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 and WRITE sqid:1 nsid:1 lba:23136-23720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:0024-0068 p:0 m:0 dnr:0
00:24:18.276 [2024-12-12 10:37:35.775454] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:23536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.276 [2024-12-12 10:37:35.775466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:18.276 [2024-12-12 10:37:35.775488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.276 [2024-12-12 10:37:35.775501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:18.276 [2024-12-12 10:37:35.775523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.276 [2024-12-12 10:37:35.775536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:18.276 [2024-12-12 10:37:35.775560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.276 [2024-12-12 10:37:35.775577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:18.276 [2024-12-12 10:37:35.775600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:23568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.276 [2024-12-12 10:37:35.775612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:18.276 [2024-12-12 10:37:35.775635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.276 [2024-12-12 10:37:35.775647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:18.276 [2024-12-12 10:37:35.775669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.276 [2024-12-12 10:37:35.775682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:18.277 [2024-12-12 10:37:35.775704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.277 [2024-12-12 10:37:35.775717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:18.277 [2024-12-12 10:37:35.775739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.277 [2024-12-12 10:37:35.775751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:18.277 [2024-12-12 10:37:35.775773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.277 [2024-12-12 10:37:35.775786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 
00:24:18.277 [2024-12-12 10:37:35.775808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:23616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.277 [2024-12-12 10:37:35.775821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:18.277 [2024-12-12 10:37:35.775844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.277 [2024-12-12 10:37:35.775856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:18.277 [2024-12-12 10:37:35.776493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.277 [2024-12-12 10:37:35.776509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:18.277 [2024-12-12 10:37:35.776526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.277 [2024-12-12 10:37:35.776534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:18.277 [2024-12-12 10:37:35.776548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.277 [2024-12-12 10:37:35.776556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:18.277 [2024-12-12 10:37:35.776581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:23656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.277 [2024-12-12 10:37:35.776591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:18.277 [2024-12-12 10:37:35.776605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:23664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.277 [2024-12-12 10:37:35.776613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:18.277 [2024-12-12 10:37:35.776628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.277 [2024-12-12 10:37:35.776635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:18.277 [2024-12-12 10:37:35.776650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.277 [2024-12-12 10:37:35.776658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:18.277 [2024-12-12 10:37:35.776673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.277 [2024-12-12 10:37:35.776681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:95 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:18.277 [2024-12-12 10:37:35.776695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:23696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.277 [2024-12-12 10:37:35.776703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.277 [2024-12-12 10:37:35.776718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.277 [2024-12-12 10:37:35.776726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:18.277 [2024-12-12 10:37:35.776740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.277 [2024-12-12 10:37:35.776748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:18.277 [2024-12-12 10:37:35.776763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.277 [2024-12-12 10:37:35.776771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.277 [2024-12-12 10:37:35.776786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.277 [2024-12-12 10:37:35.776794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.277 [2024-12-12 10:37:35.776808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.277 [2024-12-12 10:37:35.776816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:18.277 [2024-12-12 10:37:35.776831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.277 [2024-12-12 10:37:35.776839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:18.277 [2024-12-12 10:37:35.776853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.277 [2024-12-12 10:37:35.776863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:18.277 [2024-12-12 10:37:35.776877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:22808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.277 [2024-12-12 10:37:35.776885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:18.277 [2024-12-12 10:37:35.776900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.277 [2024-12-12 10:37:35.776908] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:18.277 [2024-12-12 10:37:35.776923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.277 [2024-12-12 10:37:35.776931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:18.277 [2024-12-12 10:37:35.776945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:22832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.277 [2024-12-12 10:37:35.776953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:18.277 [2024-12-12 10:37:35.776968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.277 [2024-12-12 10:37:35.776976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:18.277 [2024-12-12 10:37:35.776991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.277 [2024-12-12 10:37:35.776999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:18.277 [2024-12-12 10:37:35.777013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:22856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.277 [2024-12-12 10:37:35.777021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:18.277 [2024-12-12 10:37:35.777036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.277 [2024-12-12 10:37:35.777044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:18.277 [2024-12-12 10:37:35.777058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.277 [2024-12-12 10:37:35.777066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:18.277 [2024-12-12 10:37:35.777081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.277 [2024-12-12 10:37:35.777089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:18.277 [2024-12-12 10:37:35.777103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:23712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.277 [2024-12-12 10:37:35.777111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:18.277 [2024-12-12 10:37:35.777126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:23720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:18.277 [2024-12-12 10:37:35.777136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:18.277 [2024-12-12 10:37:35.777151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.277 [2024-12-12 10:37:35.777159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:18.277 [2024-12-12 10:37:35.777174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.277 [2024-12-12 10:37:35.777182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:18.277 [2024-12-12 10:37:35.777196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:23744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.277 [2024-12-12 10:37:35.777204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:18.277 [2024-12-12 10:37:35.777219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.277 [2024-12-12 10:37:35.777227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:18.277 [2024-12-12 10:37:35.777241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:22880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.277 [2024-12-12 10:37:35.777249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:18.277 [2024-12-12 10:37:35.777264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.277 [2024-12-12 10:37:35.777272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:18.277 [2024-12-12 10:37:35.777286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:22896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.278 [2024-12-12 10:37:35.777294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:18.278 [2024-12-12 10:37:35.777309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.278 [2024-12-12 10:37:35.777317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:18.278 [2024-12-12 10:37:35.777332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.278 [2024-12-12 10:37:35.777340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:18.278 [2024-12-12 10:37:35.777354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 
nsid:1 lba:22920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.278 [2024-12-12 10:37:35.777362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:18.278 [2024-12-12 10:37:35.777377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.278 [2024-12-12 10:37:35.777385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:18.278 [2024-12-12 10:37:35.777399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.278 [2024-12-12 10:37:35.777408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:18.278 [2024-12-12 10:37:35.777423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:22944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.278 [2024-12-12 10:37:35.777432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:18.278 [2024-12-12 10:37:35.777447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.278 [2024-12-12 10:37:35.777454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:18.278 [2024-12-12 10:37:35.777469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.278 [2024-12-12 10:37:35.777477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:18.278 [2024-12-12 10:37:35.777491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.278 [2024-12-12 10:37:35.777499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:18.278 [2024-12-12 10:37:35.777514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.278 [2024-12-12 10:37:35.777522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.278 [2024-12-12 10:37:35.777537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.278 [2024-12-12 10:37:35.777545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:18.278 [2024-12-12 10:37:35.777559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.278 [2024-12-12 10:37:35.777567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:18.278 [2024-12-12 10:37:35.777585] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:23000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.278 [2024-12-12 10:37:35.777594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:18.278 [2024-12-12 10:37:35.777608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:23008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.278 [2024-12-12 10:37:35.777616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:18.278 [2024-12-12 10:37:35.777631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.278 [2024-12-12 10:37:35.777639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:18.278 [2024-12-12 10:37:35.777654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.278 [2024-12-12 10:37:35.777662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:18.278 [2024-12-12 10:37:35.777677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.278 [2024-12-12 10:37:35.777685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:18.278 [2024-12-12 10:37:35.777702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.278 [2024-12-12 10:37:35.777709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:18.278 [2024-12-12 10:37:35.777724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.278 [2024-12-12 10:37:35.777732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:18.278 [2024-12-12 10:37:35.777747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.278 [2024-12-12 10:37:35.777755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:18.278 [2024-12-12 10:37:35.777769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.278 [2024-12-12 10:37:35.777777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:18.278 [2024-12-12 10:37:35.777792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.278 [2024-12-12 10:37:35.777800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002d p:0 m:0 dnr:0 
00:24:18.278 [2024-12-12 10:37:35.777814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.278 [2024-12-12 10:37:35.777822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:18.278 [2024-12-12 10:37:35.777836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:23088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.278 [2024-12-12 10:37:35.777845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:18.278 [2024-12-12 10:37:35.777859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.278 [2024-12-12 10:37:35.777867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:18.278 [2024-12-12 10:37:35.777882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:23104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.278 [2024-12-12 10:37:35.777890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:18.278 [2024-12-12 10:37:35.777904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.278 [2024-12-12 10:37:35.777912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:18.278 [2024-12-12 10:37:35.777927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.278 [2024-12-12 10:37:35.777935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:18.278 [2024-12-12 10:37:35.777949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.278 [2024-12-12 10:37:35.777957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:18.278 [2024-12-12 10:37:35.777971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.278 [2024-12-12 10:37:35.777981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:18.278 [2024-12-12 10:37:35.777996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.278 [2024-12-12 10:37:35.778004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:18.278 [2024-12-12 10:37:35.778018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.278 [2024-12-12 10:37:35.778026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:18.278 [2024-12-12 10:37:35.778041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:23160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.278 [2024-12-12 10:37:35.778049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:18.278 [2024-12-12 10:37:35.778063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.278 [2024-12-12 10:37:35.778071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:18.278 [2024-12-12 10:37:35.778086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:23176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.278 [2024-12-12 10:37:35.778094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:18.278 [2024-12-12 10:37:35.778714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.278 [2024-12-12 10:37:35.778728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:18.278 [2024-12-12 10:37:35.778745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:23192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.278 [2024-12-12 10:37:35.778753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:18.278 [2024-12-12 10:37:35.778768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.278 [2024-12-12 10:37:35.778776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:18.278 [2024-12-12 10:37:35.778790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.278 [2024-12-12 10:37:35.778798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:18.278 [2024-12-12 10:37:35.778813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.279 [2024-12-12 10:37:35.778821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:18.279 [2024-12-12 10:37:35.778835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.279 [2024-12-12 10:37:35.778843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:18.279 [2024-12-12 10:37:35.778858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.279 [2024-12-12 10:37:35.778870] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.279 [2024-12-12 10:37:35.778884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:23240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.279 [2024-12-12 10:37:35.778892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:18.279 [2024-12-12 10:37:35.778907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.279 [2024-12-12 10:37:35.778915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:18.279 [2024-12-12 10:37:35.778929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.279 [2024-12-12 10:37:35.778937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:18.279 [2024-12-12 10:37:35.778952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.279 [2024-12-12 10:37:35.778960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:18.279 [2024-12-12 10:37:35.778975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:23272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.279 [2024-12-12 10:37:35.778983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:18.279 [2024-12-12 10:37:35.778997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.279 [2024-12-12 10:37:35.779005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:18.279 [2024-12-12 10:37:35.779019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:23288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.279 [2024-12-12 10:37:35.779027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:18.279 [2024-12-12 10:37:35.779042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.279 [2024-12-12 10:37:35.779050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:18.279 [2024-12-12 10:37:35.779064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.279 [2024-12-12 10:37:35.779072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:18.279 [2024-12-12 10:37:35.779087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:18.279 [2024-12-12 10:37:35.779095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:18.279 [2024-12-12 10:37:35.779109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.279 [2024-12-12 10:37:35.779117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:18.279 [2024-12-12 10:37:35.779131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.279 [2024-12-12 10:37:35.779139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:18.279 [2024-12-12 10:37:35.779156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:23336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.279 [2024-12-12 10:37:35.779164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:18.279 [2024-12-12 10:37:35.779178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.279 [2024-12-12 10:37:35.779187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:18.279 [2024-12-12 10:37:35.779201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.279 [2024-12-12 10:37:35.779209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:18.279 [2024-12-12 10:37:35.779224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.279 [2024-12-12 10:37:35.779232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:18.279 [2024-12-12 10:37:35.779246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.279 [2024-12-12 10:37:35.779254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:18.279 [2024-12-12 10:37:35.779269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.279 [2024-12-12 10:37:35.779277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:18.279 [2024-12-12 10:37:35.779292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.279 [2024-12-12 10:37:35.779300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:18.279 [2024-12-12 10:37:35.779314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 
lba:22744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.279 [2024-12-12 10:37:35.779322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:18.279 [2024-12-12 10:37:35.779337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.279 [2024-12-12 10:37:35.779345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:18.279 [2024-12-12 10:37:35.779359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.279 [2024-12-12 10:37:35.779368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:18.279 [2024-12-12 10:37:35.779383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.279 [2024-12-12 10:37:35.779391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:18.279 [2024-12-12 10:37:35.779406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.279 [2024-12-12 10:37:35.779414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:18.279 [2024-12-12 10:37:35.779430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:23416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.279 [2024-12-12 10:37:35.779437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:18.279 [2024-12-12 10:37:35.779452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.279 [2024-12-12 10:37:35.779460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:18.279 [2024-12-12 10:37:35.779475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.279 [2024-12-12 10:37:35.779483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:18.279 [2024-12-12 10:37:35.779497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.279 [2024-12-12 10:37:35.779505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:18.279 [2024-12-12 10:37:35.779520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.279 [2024-12-12 10:37:35.779528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:18.279 [2024-12-12 10:37:35.779542] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.279 [2024-12-12 10:37:35.779550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:18.279 [2024-12-12 10:37:35.779565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.279 [2024-12-12 10:37:35.779577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:18.279 [2024-12-12 10:37:35.779592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.279 [2024-12-12 10:37:35.779599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.279 [2024-12-12 10:37:35.779614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:23480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.279 [2024-12-12 10:37:35.779622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:18.279 [2024-12-12 10:37:35.779637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.279 [2024-12-12 10:37:35.779645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:18.280 [2024-12-12 10:37:35.779659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:23496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.280 [2024-12-12 10:37:35.779667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:18.280 [2024-12-12 10:37:35.779682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.280 [2024-12-12 10:37:35.779690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:18.280 [2024-12-12 10:37:35.779704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.280 [2024-12-12 10:37:35.779717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:18.280 [2024-12-12 10:37:35.779731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.280 [2024-12-12 10:37:35.779739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:18.280 [2024-12-12 10:37:35.779754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.280 [2024-12-12 10:37:35.779762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:18.280 
[2024-12-12 10:37:35.779776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.280 [2024-12-12 10:37:35.779784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
[... 2024-12-12 10:37:35.779-35.786 (console 00:24:18.280-00:24:18.285): the same nvme_io_qpair_print_command / spdk_nvme_print_completion pair repeats for every outstanding I/O on qid:1 — WRITE lba:23136-23752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 and READ lba:22736-23128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, cids varying across 0-126 — and every command completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0, sqhd wrapping through 0000-007f, p:0 m:0 dnr:0 ...]
00:24:18.285 [2024-12-12 10:37:35.786330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.285 [2024-12-12 10:37:35.786336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:18.285 [2024-12-12 10:37:35.786348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:23160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.285 [2024-12-12 10:37:35.786355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:18.285 [2024-12-12 10:37:35.786874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.285 [2024-12-12 10:37:35.786887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:18.285 [2024-12-12 10:37:35.786901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:23176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.285 [2024-12-12 10:37:35.786908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:18.285 [2024-12-12 10:37:35.786920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.285 [2024-12-12 10:37:35.786927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:18.285 [2024-12-12 10:37:35.786939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.285 [2024-12-12 10:37:35.786946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:18.285 [2024-12-12 10:37:35.786957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.285 [2024-12-12 10:37:35.786964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:18.285 [2024-12-12 10:37:35.786976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.285 [2024-12-12 10:37:35.786983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:18.285 [2024-12-12 10:37:35.786995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.285 [2024-12-12 10:37:35.787002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:18.285 [2024-12-12 10:37:35.787014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:23224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.286 [2024-12-12 10:37:35.787021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:18.286 [2024-12-12 10:37:35.787033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 
lba:23232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.286 [2024-12-12 10:37:35.787040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.286 [2024-12-12 10:37:35.787053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.286 [2024-12-12 10:37:35.787063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:18.286 [2024-12-12 10:37:35.787075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.286 [2024-12-12 10:37:35.787082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:18.286 [2024-12-12 10:37:35.787094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:23256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.286 [2024-12-12 10:37:35.787100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:18.286 [2024-12-12 10:37:35.787112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.286 [2024-12-12 10:37:35.787119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:18.286 [2024-12-12 10:37:35.787131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:23272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.286 [2024-12-12 10:37:35.787138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:18.286 [2024-12-12 10:37:35.787150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.286 [2024-12-12 10:37:35.787157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:18.286 [2024-12-12 10:37:35.787169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.286 [2024-12-12 10:37:35.787175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:18.286 [2024-12-12 10:37:35.787187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.286 [2024-12-12 10:37:35.787194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:18.286 [2024-12-12 10:37:35.787206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.286 [2024-12-12 10:37:35.787213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:18.286 [2024-12-12 10:37:35.787225] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.286 [2024-12-12 10:37:35.787231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:18.286 [2024-12-12 10:37:35.787243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:23320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.286 [2024-12-12 10:37:35.787250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:18.286 [2024-12-12 10:37:35.787262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.286 [2024-12-12 10:37:35.787269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:18.286 [2024-12-12 10:37:35.787281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.286 [2024-12-12 10:37:35.787287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:18.286 [2024-12-12 10:37:35.787301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.286 [2024-12-12 10:37:35.787308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:18.286 [2024-12-12 10:37:35.787320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.286 [2024-12-12 10:37:35.787326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:18.286 [2024-12-12 10:37:35.787338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.286 [2024-12-12 10:37:35.787345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:18.286 [2024-12-12 10:37:35.787357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:23368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.286 [2024-12-12 10:37:35.787364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:18.286 [2024-12-12 10:37:35.787376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.286 [2024-12-12 10:37:35.787382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:18.286 [2024-12-12 10:37:35.787395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.286 [2024-12-12 10:37:35.787401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 
00:24:18.286 [2024-12-12 10:37:35.787413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.286 [2024-12-12 10:37:35.787420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:18.286 [2024-12-12 10:37:35.787432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.286 [2024-12-12 10:37:35.787438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:18.286 [2024-12-12 10:37:35.787450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.286 [2024-12-12 10:37:35.787457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:18.286 [2024-12-12 10:37:35.787469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:23400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.286 [2024-12-12 10:37:35.787476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:18.286 [2024-12-12 10:37:35.787488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.286 [2024-12-12 10:37:35.787494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:18.286 [2024-12-12 10:37:35.787506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.286 [2024-12-12 10:37:35.787513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:18.286 [2024-12-12 10:37:35.787526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.286 [2024-12-12 10:37:35.787533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:18.286 [2024-12-12 10:37:35.787545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.286 [2024-12-12 10:37:35.787551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:18.286 [2024-12-12 10:37:35.787563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.286 [2024-12-12 10:37:35.787577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:18.286 [2024-12-12 10:37:35.787589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.286 [2024-12-12 10:37:35.787596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:16 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:18.286 [2024-12-12 10:37:35.787608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.286 [2024-12-12 10:37:35.787614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:18.286 [2024-12-12 10:37:35.787626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:23464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.286 [2024-12-12 10:37:35.787633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:18.286 [2024-12-12 10:37:35.787645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.286 [2024-12-12 10:37:35.787652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.286 [2024-12-12 10:37:35.787663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:23480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.286 [2024-12-12 10:37:35.787670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:18.286 [2024-12-12 10:37:35.787682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.286 [2024-12-12 10:37:35.787689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:18.286 [2024-12-12 10:37:35.787700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.286 [2024-12-12 10:37:35.787707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:18.286 [2024-12-12 10:37:35.787720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.286 [2024-12-12 10:37:35.787727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:18.286 [2024-12-12 10:37:35.787740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.287 [2024-12-12 10:37:35.787746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:18.287 [2024-12-12 10:37:35.787758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.287 [2024-12-12 10:37:35.787766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:18.287 [2024-12-12 10:37:35.787778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.287 [2024-12-12 10:37:35.787785] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:18.287 [2024-12-12 10:37:35.787796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.287 [2024-12-12 10:37:35.787803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:18.287 [2024-12-12 10:37:35.787815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.287 [2024-12-12 10:37:35.787821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:18.287 [2024-12-12 10:37:35.787833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.287 [2024-12-12 10:37:35.787840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:18.287 [2024-12-12 10:37:35.787851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.287 [2024-12-12 10:37:35.787858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:18.287 [2024-12-12 10:37:35.787870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.287 [2024-12-12 10:37:35.787877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:18.287 [2024-12-12 10:37:35.787889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.287 [2024-12-12 10:37:35.787896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:18.287 [2024-12-12 10:37:35.787908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:23584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.287 [2024-12-12 10:37:35.787914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:18.287 [2024-12-12 10:37:35.787926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.287 [2024-12-12 10:37:35.787932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:18.287 [2024-12-12 10:37:35.787945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.287 [2024-12-12 10:37:35.787951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:18.287 [2024-12-12 10:37:35.788420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:23608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:18.287 [2024-12-12 10:37:35.788432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:18.287 [2024-12-12 10:37:35.788446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.287 [2024-12-12 10:37:35.788456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:18.287 [2024-12-12 10:37:35.788468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:23624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.287 [2024-12-12 10:37:35.788475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:18.287 [2024-12-12 10:37:35.788487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.287 [2024-12-12 10:37:35.788494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:18.287 [2024-12-12 10:37:35.788506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.287 [2024-12-12 10:37:35.788512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:18.287 [2024-12-12 10:37:35.788524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.287 [2024-12-12 10:37:35.788531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:18.287 [2024-12-12 10:37:35.788543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.287 [2024-12-12 10:37:35.788550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:18.287 [2024-12-12 10:37:35.788562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.287 [2024-12-12 10:37:35.788574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:18.287 [2024-12-12 10:37:35.788586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.287 [2024-12-12 10:37:35.788593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:18.287 [2024-12-12 10:37:35.788606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:23680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.287 [2024-12-12 10:37:35.788612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:18.287 [2024-12-12 10:37:35.788624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 
lba:23688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.287 [2024-12-12 10:37:35.788631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:18.287 [2024-12-12 10:37:35.788643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.287 [2024-12-12 10:37:35.788650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.287 [2024-12-12 10:37:35.788661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.287 [2024-12-12 10:37:35.788668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:18.287 [2024-12-12 10:37:35.788680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.287 [2024-12-12 10:37:35.788687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:18.287 [2024-12-12 10:37:35.788701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.287 [2024-12-12 10:37:35.788708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.287 [2024-12-12 10:37:35.788720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.287 [2024-12-12 10:37:35.788726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.287 [2024-12-12 10:37:35.788738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.287 [2024-12-12 10:37:35.788745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:18.287 [2024-12-12 10:37:35.788757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.287 [2024-12-12 10:37:35.788764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:18.287 [2024-12-12 10:37:35.788776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:22800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.287 [2024-12-12 10:37:35.788782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:18.287 [2024-12-12 10:37:35.788795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.287 [2024-12-12 10:37:35.788801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:18.287 [2024-12-12 10:37:35.788814] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.287 [2024-12-12 10:37:35.788820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:18.287 [2024-12-12 10:37:35.788832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.287 [2024-12-12 10:37:35.788839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:18.287 [2024-12-12 10:37:35.788851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.287 [2024-12-12 10:37:35.788858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:18.287 [2024-12-12 10:37:35.788870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.287 [2024-12-12 10:37:35.788876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:18.287 [2024-12-12 10:37:35.788888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.287 [2024-12-12 10:37:35.788895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:18.287 [2024-12-12 10:37:35.788907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.287 [2024-12-12 10:37:35.788914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:18.287 [2024-12-12 10:37:35.788927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.287 [2024-12-12 10:37:35.788934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:18.287 [2024-12-12 10:37:35.788946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.287 [2024-12-12 10:37:35.788953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:18.287 [2024-12-12 10:37:35.788965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.288 [2024-12-12 10:37:35.788972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:18.288 [2024-12-12 10:37:35.788983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:23712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.288 [2024-12-12 10:37:35.788990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:24:18.288 [2024-12-12 10:37:35.789002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.288 [2024-12-12 10:37:35.789009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:18.288 [2024-12-12 10:37:35.789253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:23728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.288 [2024-12-12 10:37:35.789263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:18.288 [2024-12-12 10:37:35.789277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.288 [2024-12-12 10:37:35.789286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:18.288 [2024-12-12 10:37:35.789298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.288 [2024-12-12 10:37:35.789304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:18.288 [2024-12-12 10:37:35.789317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:23752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.288 [2024-12-12 10:37:35.789323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:18.288 [2024-12-12 10:37:35.789335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:22880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.288 [2024-12-12 10:37:35.789342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:18.288 [2024-12-12 10:37:35.789354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.288 [2024-12-12 10:37:35.789361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:18.288 [2024-12-12 10:37:35.789372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:22896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.288 [2024-12-12 10:37:35.789379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:18.288 [2024-12-12 10:37:35.789391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.288 [2024-12-12 10:37:35.789400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:18.288 [2024-12-12 10:37:35.789412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.288 [2024-12-12 10:37:35.789419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:18.288 [2024-12-12 10:37:35.789431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.288 [2024-12-12 10:37:35.789438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:18.288 [2024-12-12 10:37:35.789450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.288 [2024-12-12 10:37:35.789456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:18.288 [2024-12-12 10:37:35.789468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.288 [2024-12-12 10:37:35.789475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:18.288 [2024-12-12 10:37:35.789487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.288 [2024-12-12 10:37:35.789494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:18.288 [2024-12-12 10:37:35.789506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.288 [2024-12-12 10:37:35.789512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:18.288 [2024-12-12 10:37:35.789525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.288 [2024-12-12 10:37:35.789531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:18.288 [2024-12-12 10:37:35.789543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.288 [2024-12-12 10:37:35.789550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:18.288 [2024-12-12 10:37:35.789561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:22976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.288 [2024-12-12 10:37:35.789568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.288 [2024-12-12 10:37:35.789585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.288 [2024-12-12 10:37:35.789592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:18.288 [2024-12-12 10:37:35.789604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.288 [2024-12-12 10:37:35.789610] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:18.288 [2024-12-12 10:37:35.789622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.288 [2024-12-12 10:37:35.789630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:18.288 [2024-12-12 10:37:35.789642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.288 [2024-12-12 10:37:35.789649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:18.288 [2024-12-12 10:37:35.789661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.288 [2024-12-12 10:37:35.789668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:18.288 [2024-12-12 10:37:35.789680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.288 [2024-12-12 10:37:35.789686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:18.288 [2024-12-12 10:37:35.789698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.288 [2024-12-12 10:37:35.789705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:18.288 [2024-12-12 10:37:35.789717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.288 [2024-12-12 10:37:35.789723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:18.288 [2024-12-12 10:37:35.789735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.288 [2024-12-12 10:37:35.789742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:18.288 [2024-12-12 10:37:35.789754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:23056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.288 [2024-12-12 10:37:35.789760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:18.288 [2024-12-12 10:37:35.789772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.288 [2024-12-12 10:37:35.789779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:18.288 [2024-12-12 10:37:35.789791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23072 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:18.288 [2024-12-12 10:37:35.789798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:18.288 [2024-12-12 10:37:35.789810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.288 [2024-12-12 10:37:35.789816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:18.288 [2024-12-12 10:37:35.789828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.288 [2024-12-12 10:37:35.789835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:18.288 [2024-12-12 10:37:35.789847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.288 [2024-12-12 10:37:35.789853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:18.288 [2024-12-12 10:37:35.789867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:23104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.288 [2024-12-12 10:37:35.789874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:18.288 [2024-12-12 10:37:35.789886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.288 [2024-12-12 10:37:35.789893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:18.288 [2024-12-12 10:37:35.789905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.288 [2024-12-12 10:37:35.789911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:18.288 [2024-12-12 10:37:35.789923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:23128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.288 [2024-12-12 10:37:35.789930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:18.288 [2024-12-12 10:37:35.789942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.288 [2024-12-12 10:37:35.789948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:18.288 [2024-12-12 10:37:35.789960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.289 [2024-12-12 10:37:35.789967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:18.289 [2024-12-12 10:37:35.789979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:115 nsid:1 lba:23152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.289 [2024-12-12 10:37:35.789986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:18.289 [2024-12-12 10:37:35.790269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.289 [2024-12-12 10:37:35.790278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:18.289 [2024-12-12 10:37:35.790291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.289 [2024-12-12 10:37:35.790298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:18.289 [2024-12-12 10:37:35.790310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:23176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.289 [2024-12-12 10:37:35.790317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:18.289 [2024-12-12 10:37:35.790328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.289 [2024-12-12 10:37:35.790335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:18.289 [2024-12-12 10:37:35.790347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.289 [2024-12-12 10:37:35.790354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:18.289 [2024-12-12 10:37:35.790368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.289 [2024-12-12 10:37:35.790375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:18.289 [2024-12-12 10:37:35.790387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.289 [2024-12-12 10:37:35.790394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:18.289 [2024-12-12 10:37:35.790406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.289 [2024-12-12 10:37:35.790413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:18.289 [2024-12-12 10:37:35.790425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.289 [2024-12-12 10:37:35.790431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:18.289 [2024-12-12 10:37:35.790443] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.289 [2024-12-12 10:37:35.790450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:18.289 [2024-12-12 10:37:35.790806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:18.289 [2024-12-12 10:37:35.790812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
[... repetition elided: interleaved WRITE (lba 23232-23712, len:8, SGL DATA BLOCK OFFSET) and READ (lba 22736-22872, len:8, SGL TRANSPORT DATA BLOCK) commands on sqid:1, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1; sqhd advances 0041 through 007f, wraps to 0000, and runs on to 000f; timestamps 10:37:35.790450-10:37:35.792426 ...]
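The "(03/02)" printed in every completion above is the NVMe status pair (status code type/status code): SCT 0x3 (path-related) with SC 0x02, the ANA "inaccessible" state, so every READ and WRITE on this path is being failed by the target. A minimal sketch of how a host-side completion callback could classify these errors, assuming the public definitions in SPDK's spdk/nvme_spec.h (the helper name is hypothetical, not part of this test):

    #include <stdbool.h>
    #include "spdk/nvme_spec.h"

    /* Hypothetical helper: true when a completion carries the ANA
     * "inaccessible" path status seen above -- sct 0x3, sc 0x02,
     * which spdk_nvme_print_completion renders as "(03/02)". */
    static bool
    cpl_is_ana_inaccessible(const struct spdk_nvme_cpl *cpl)
    {
            return cpl->status.sct == SPDK_NVME_SCT_PATH &&
                   cpl->status.sc == SPDK_NVME_SC_ASYMMETRIC_ACCESS_INACCESSIBLE;
    }

On such a status a multipath host typically retries the I/O on another path or waits for an ANA state change, consistent with the second pass over the same LBA range visible below.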
00:24:18.291 [2024-12-12 10:37:35.792438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.291 [2024-12-12 10:37:35.792444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
[... repetition elided: further interleaved WRITE (lba 23136-23752, len:8) and READ (lba 22880-23128, len:8) commands on sqid:1, including a second pass over lba 23232-23680 under new cids, every completion again ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1; sqhd advances 0011 through 007b; timestamps 10:37:35.792-10:37:35.795 ...]
00:24:18.294 [2024-12-12 10:37:35.795225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118
nsid:1 lba:23688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.294 [2024-12-12 10:37:35.795232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:18.294 [2024-12-12 10:37:35.795246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.294 [2024-12-12 10:37:35.795253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.294 [2024-12-12 10:37:35.795267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.294 [2024-12-12 10:37:35.795273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:18.294 [2024-12-12 10:37:35.795288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.294 [2024-12-12 10:37:35.795294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:18.294 [2024-12-12 10:37:35.795309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.294 [2024-12-12 10:37:35.795315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.294 [2024-12-12 10:37:35.795331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.294 [2024-12-12 10:37:35.795338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.294 [2024-12-12 10:37:35.795352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.294 [2024-12-12 10:37:35.795359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:18.294 [2024-12-12 10:37:35.795375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:22792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.294 [2024-12-12 10:37:35.795381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:18.294 [2024-12-12 10:37:35.795396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.294 [2024-12-12 10:37:35.795402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:18.294 [2024-12-12 10:37:35.795417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.294 [2024-12-12 10:37:35.795424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:18.294 [2024-12-12 10:37:35.795438] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.294 [2024-12-12 10:37:35.795444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:18.294 [2024-12-12 10:37:35.795459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.294 [2024-12-12 10:37:35.795465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:18.294 [2024-12-12 10:37:35.795480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.294 [2024-12-12 10:37:35.795487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:18.294 [2024-12-12 10:37:35.795501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.294 [2024-12-12 10:37:35.795508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:18.294 [2024-12-12 10:37:35.795523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.294 [2024-12-12 10:37:35.795530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:18.294 [2024-12-12 10:37:35.795544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.294 [2024-12-12 10:37:35.795551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:18.294 [2024-12-12 10:37:35.795567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.294 [2024-12-12 10:37:35.795579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:18.294 [2024-12-12 10:37:35.795597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.294 [2024-12-12 10:37:35.795604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:18.294 [2024-12-12 10:37:35.795619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:23704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.294 [2024-12-12 10:37:35.795625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:18.294 [2024-12-12 10:37:35.795640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.294 [2024-12-12 10:37:35.795647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:24:18.294 [2024-12-12 10:37:35.795746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:23720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.294 [2024-12-12 10:37:35.795754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:18.294 [2024-12-12 10:37:35.795772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.294 [2024-12-12 10:37:35.795778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:18.294 [2024-12-12 10:37:35.795795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.294 [2024-12-12 10:37:35.795802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:18.294 [2024-12-12 10:37:35.795819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:23744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.294 [2024-12-12 10:37:35.795826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:18.294 [2024-12-12 10:37:35.795842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.294 [2024-12-12 10:37:35.795848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:18.294 [2024-12-12 10:37:35.795865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.294 [2024-12-12 10:37:35.795872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:18.294 [2024-12-12 10:37:35.795889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:22888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.294 [2024-12-12 10:37:35.795896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:18.294 [2024-12-12 10:37:35.795912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.295 [2024-12-12 10:37:35.795919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:18.295 [2024-12-12 10:37:35.795935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.295 [2024-12-12 10:37:35.795942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:18.295 [2024-12-12 10:37:35.795959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.295 [2024-12-12 10:37:35.795968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:18.295 [2024-12-12 10:37:35.795984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.295 [2024-12-12 10:37:35.795992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:18.295 [2024-12-12 10:37:35.796008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.295 [2024-12-12 10:37:35.796015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:18.295 [2024-12-12 10:37:35.796032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.295 [2024-12-12 10:37:35.796043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:18.295 [2024-12-12 10:37:35.796061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.295 [2024-12-12 10:37:35.796068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:18.295 [2024-12-12 10:37:35.796085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.295 [2024-12-12 10:37:35.796092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:18.295 [2024-12-12 10:37:35.796108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.295 [2024-12-12 10:37:35.796115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:18.295 [2024-12-12 10:37:35.796132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:22968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.295 [2024-12-12 10:37:35.796138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:18.295 [2024-12-12 10:37:35.796155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.295 [2024-12-12 10:37:35.796162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.295 [2024-12-12 10:37:35.796178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.295 [2024-12-12 10:37:35.796185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:18.295 [2024-12-12 10:37:35.796201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:22992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.295 [2024-12-12 10:37:35.796208] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:18.295 [2024-12-12 10:37:35.796225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.295 [2024-12-12 10:37:35.796231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:18.295 [2024-12-12 10:37:35.796248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.295 [2024-12-12 10:37:35.796256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:18.295 [2024-12-12 10:37:35.796273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.295 [2024-12-12 10:37:35.796280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:18.295 [2024-12-12 10:37:35.796296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.295 [2024-12-12 10:37:35.796303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:18.295 [2024-12-12 10:37:35.796319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.295 [2024-12-12 10:37:35.796326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:18.295 [2024-12-12 10:37:35.796343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.295 [2024-12-12 10:37:35.796349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:18.295 [2024-12-12 10:37:35.796366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:23048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.295 [2024-12-12 10:37:35.796372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:18.295 [2024-12-12 10:37:35.796389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.295 [2024-12-12 10:37:35.796396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:18.295 [2024-12-12 10:37:35.796412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.295 [2024-12-12 10:37:35.796419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:18.295 [2024-12-12 10:37:35.796436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:18.295 [2024-12-12 10:37:35.796443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:18.295 [2024-12-12 10:37:35.796459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.295 [2024-12-12 10:37:35.796466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:18.295 [2024-12-12 10:37:35.796483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.295 [2024-12-12 10:37:35.796489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:18.295 [2024-12-12 10:37:35.796505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:23096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.295 [2024-12-12 10:37:35.796512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:18.295 [2024-12-12 10:37:35.796529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.295 [2024-12-12 10:37:35.796535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:18.295 [2024-12-12 10:37:35.796553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.295 [2024-12-12 10:37:35.796560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:18.295 [2024-12-12 10:37:35.796581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:23120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.295 [2024-12-12 10:37:35.796588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:18.295 [2024-12-12 10:37:35.796605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.295 [2024-12-12 10:37:35.796611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:18.295 [2024-12-12 10:37:35.796628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.295 [2024-12-12 10:37:35.796634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:18.295 [2024-12-12 10:37:35.796651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.295 [2024-12-12 10:37:35.796658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:18.295 11561.38 IOPS, 45.16 MiB/s [2024-12-12T09:37:52.318Z] 10735.57 IOPS, 41.94 MiB/s [2024-12-12T09:37:52.318Z] 
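A note on the status pair that dominates this burst: in spdk_nvme_print_completion output, "(03/02)" is the NVMe Status Code Type followed by the Status Code, both in hex. SCT 0x3 is Path Related Status, and SC 0x02 under it is Asymmetric Access Inaccessible, meaning the controller serving qid:1 sits in an ANA group the host is not allowed to use, which is the condition the multipath test deliberately provokes. A small hedged sketch of decoding that pair (not part of the captured log; only the path-related codes seen here are mapped):

    #!/usr/bin/env bash
    # Decode the "(SCT/SC)" pair printed by spdk_nvme_print_completion above.
    decode_nvme_status() {
        local sct=$1 sc=$2
        case "$sct/$sc" in
            03/00) echo "Path Related / Internal Path Error" ;;
            03/01) echo "Path Related / Asymmetric Access Persistent Loss" ;;
            03/02) echo "Path Related / Asymmetric Access Inaccessible" ;;
            03/03) echo "Path Related / Asymmetric Access Transition" ;;
            *)     echo "other (sct=$sct sc=$sc)" ;;
        esac
    }
    decode_nvme_status 03 02   # -> Path Related / Asymmetric Access Inaccessible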
00:24:18.295 11561.38 IOPS, 45.16 MiB/s [2024-12-12T09:37:52.318Z] 10735.57 IOPS, 41.94 MiB/s [2024-12-12T09:37:52.318Z] 10019.87 IOPS, 39.14 MiB/s [2024-12-12T09:37:52.318Z] 9458.19 IOPS, 36.95 MiB/s [2024-12-12T09:37:52.318Z] 9582.12 IOPS, 37.43 MiB/s [2024-12-12T09:37:52.318Z] 9688.39 IOPS, 37.85 MiB/s [2024-12-12T09:37:52.318Z] 9849.63 IOPS, 38.48 MiB/s [2024-12-12T09:37:52.318Z] 10047.00 IOPS, 39.25 MiB/s [2024-12-12T09:37:52.318Z] 10226.33 IOPS, 39.95 MiB/s [2024-12-12T09:37:52.318Z] 10295.55 IOPS, 40.22 MiB/s [2024-12-12T09:37:52.318Z] 10349.65 IOPS, 40.43 MiB/s [2024-12-12T09:37:52.318Z] 10399.71 IOPS, 40.62 MiB/s [2024-12-12T09:37:52.318Z] 10528.08 IOPS, 41.13 MiB/s [2024-12-12T09:37:52.318Z] 10652.08 IOPS, 41.61 MiB/s [2024-12-12T09:37:52.318Z]
[2024-12-12 10:37:49.513926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:50536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:18.295 [2024-12-12 10:37:49.513964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:24:18.296 [2024-12-12 10:37:49.514012 .. 10:37:49.516042] nvme_qpair.c: the same print_command/print_completion pair repeats on qid:1 for WRITE lba:50552-51352 and READ lba:50528; every completion fails with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0
00:24:18.297 10721.67 IOPS, 41.88 MiB/s [2024-12-12T09:37:52.320Z] 10745.89 IOPS, 41.98 MiB/s [2024-12-12T09:37:52.320Z] Received shutdown signal, test time was about 28.970432 seconds
00:24:18.297
00:24:18.297 Latency(us)
00:24:18.297 [2024-12-12T09:37:52.320Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:24:18.297 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:18.297 Verification LBA range: start 0x0 length 0x4000
00:24:18.297 Nvme0n1                     :      28.97   10778.38      42.10       0.00     0.00   11854.44     155.06 3083812.08
00:24:18.297 [2024-12-12T09:37:52.320Z] ===================================================================================================================
00:24:18.297 [2024-12-12T09:37:52.320Z] Total                       :      28.97   10778.38      42.10       0.00     0.00   11854.44     155.06 3083812.08
00:24:18.297 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
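The @143 step above tears down the target side over SPDK's JSON-RPC interface before the generic nvmftestfini cleanup runs. A minimal sketch of the same call, assuming the target is still up and listening on SPDK's default RPC socket (/var/tmp/spdk.sock); the workspace path mirrors the one in the trace:

    #!/usr/bin/env bash
    # Sketch only, not captured output.
    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Removing the subsystem disconnects its controllers; any in-flight I/O
    # fails back to the host, which is why this runs after the verify workload.
    "$SPDK_ROOT/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    # Optional sanity check: list what remains on the target.
    "$SPDK_ROOT/scripts/rpc.py" nvmf_get_subsystems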
10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:18.297 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:24:18.297 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:18.297 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:24:18.297 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:18.297 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:18.556 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:24:18.556 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:24:18.556 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 1624912 ']'
00:24:18.556 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 1624912
00:24:18.556 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1624912 ']'
00:24:18.556 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1624912
00:24:18.556 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:24:18.556 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:18.556 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1624912
00:24:18.556 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:24:18.556 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:24:18.556 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1624912'
killing process with pid 1624912
00:24:18.556 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1624912
00:24:18.556 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1624912
00:24:18.556 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:24:18.556 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:24:18.556 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:24:18.556 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:24:18.556 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:24:18.556 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:24:18.556 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:24:18.556 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:24:18.556 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
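Between module unload and namespace teardown, the iptr helper traced at @297/@791 above scrubs the test's firewall rules by filtering a marker out of the saved ruleset. The idiom as a standalone sketch (this assumes, as the grep implies, that the test installed its rules with an SPDK_NVMF comment so a plain text filter can find them):

    #!/usr/bin/env bash
    # Sketch of the iptr idiom from nvmf/common.sh@791, not captured output.
    # Dump the live ruleset, drop every rule line mentioning SPDK_NVMF,
    # and replay the remainder; iptables-restore swaps rulesets atomically.
    iptables-save | grep -v SPDK_NVMF | iptables-restore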
00:24:18.556 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:18.556 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:18.556 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:21.093 10:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:24:21.093
00:24:21.093 real    0m40.689s
00:24:21.093 user    1m50.492s
00:24:21.093 sys     0m11.580s
00:24:21.093 10:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:21.093 10:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:24:21.093 ************************************
00:24:21.093 END TEST nvmf_host_multipath_status
00:24:21.093 ************************************
00:24:21.093 10:37:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:24:21.093 10:37:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:24:21.093 10:37:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:21.093 10:37:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:24:21.093 ************************************
00:24:21.093 START TEST nvmf_discovery_remove_ifc
00:24:21.093 ************************************
00:24:21.093 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
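run_test (nvmf_host.sh@28 above) is the harness wrapper that produced the banner pairs and the real/user/sys timing lines in this log. A rough sketch of that behavior, inferred from the trace; the real helper lives in common/autotest_common.sh and differs in detail, so every name below is illustrative:

    #!/usr/bin/env bash
    # Hedged sketch of a run_test-style wrapper, inferred from the trace above.
    run_test_sketch() {
        # The trace shows '[' 3 -le 1 ']': an arg-count guard runs first.
        if [ "$#" -le 1 ]; then
            echo "usage: run_test_sketch NAME CMD [ARGS...]" >&2
            return 1
        fi
        local test_name=$1
        shift
        echo '************************************'
        echo "START TEST $test_name"
        echo '************************************'
        time "$@"                  # source of the real/user/sys lines above
        local rc=$?
        echo '************************************'
        echo "END TEST $test_name"
        echo '************************************'
        return "$rc"
    }

    # Hypothetical usage mirroring the traced invocation:
    run_test_sketch nvmf_discovery_remove_ifc ./discovery_remove_ifc.sh --transport=tcp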
00:24:21.093 * Looking for test storage...
* Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:24:21.093 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:24:21.093 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version
00:24:21.093 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:24:21.093 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:24:21.093 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:24:21.093 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:24:21.093 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:24:21.093 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-:
00:24:21.093 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1
00:24:21.093 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-:
00:24:21.093 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2
00:24:21.093 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<'
00:24:21.093 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2
00:24:21.093 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1
00:24:21.093 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:24:21.093 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in
00:24:21.093 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1
00:24:21.093 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 ))
00:24:21.093 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:24:21.093 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1
00:24:21.093 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1
00:24:21.093 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:24:21.093 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1
00:24:21.093 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1
00:24:21.093 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2
00:24:21.093 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2
00:24:21.093 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:24:21.093 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2
00:24:21.093 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2
00:24:21.093 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:24:21.093 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:24:21.093 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0
00:24:21.093 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:24:21.093 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:24:21.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:21.093 --rc genhtml_branch_coverage=1
00:24:21.093 --rc genhtml_function_coverage=1
00:24:21.093 --rc genhtml_legend=1
00:24:21.093 --rc geninfo_all_blocks=1
00:24:21.093 --rc geninfo_unexecuted_blocks=1
00:24:21.093
00:24:21.093 '
00:24:21.093 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:24:21.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:21.093 --rc genhtml_branch_coverage=1
00:24:21.093 --rc genhtml_function_coverage=1
00:24:21.093 --rc genhtml_legend=1
00:24:21.093 --rc geninfo_all_blocks=1
00:24:21.093 --rc geninfo_unexecuted_blocks=1
00:24:21.093
00:24:21.093 '
00:24:21.093 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:24:21.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:21.093 --rc genhtml_branch_coverage=1
00:24:21.093 --rc genhtml_function_coverage=1
00:24:21.093 --rc genhtml_legend=1
00:24:21.093 --rc geninfo_all_blocks=1
00:24:21.093 --rc geninfo_unexecuted_blocks=1
00:24:21.093
00:24:21.093 '
00:24:21.093 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:24:21.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:21.093 --rc genhtml_branch_coverage=1
00:24:21.093 --rc genhtml_function_coverage=1
00:24:21.093 --rc genhtml_legend=1
00:24:21.093 --rc geninfo_all_blocks=1
00:24:21.093 --rc geninfo_unexecuted_blocks=1
00:24:21.093
00:24:21.093 '
00:24:21.093 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:24:21.093
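The cmp_versions trace above (scripts/common.sh@333-368) is the harness's whole version-comparison algorithm: split both version strings on ".", "-" and ":" into arrays, then walk the components left to right, deciding as soon as one side is numerically larger. A condensed sketch of the same logic, assuming plain bash and numeric components; the real helper also handles the other comparison operators, not just "<":

    #!/usr/bin/env bash
    # Condensed sketch of the cmp_versions logic traced above.
    # Returns 0 when $1 < $3; the '<' in $2 mirrors the traced call shape.
    version_lt() {
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            # Missing components compare as 0, so "1.15" vs "2" works.
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal is not less-than
    }
    version_lt 1.15 '<' 2 && echo "1.15 < 2"   # prints, matching the traced 'return 0'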
10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:21.093 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:21.093 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:21.094 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:24:21.094 10:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:27.664 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:27.664 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:24:27.664 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:27.664 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:27.664 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:27.664 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:27.664 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:27.664 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:24:27.664 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:27.664 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:24:27.664 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:24:27.664 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:24:27.664 10:38:00 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:24:27.664 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:24:27.664 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:24:27.664 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:27.664 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:27.664 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:27.664 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:27.665 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:27.665 10:38:00 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:27.665 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:27.665 Found net devices under 0000:af:00.0: cvl_0_0 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:27.665 Found net devices under 0000:af:00.1: cvl_0_1 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:27.665 
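(Annotation: nvmf_tcp_init in the trace above has just wired up the physical test topology: the first E810 port is moved into a private network namespace to act as the NVMe/TCP target, the second port stays in the default namespace as the initiator, and an iptables rule opens the NVMe/TCP listen port. Condensed from the commands actually run here; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addressing are specific to this run.)

    # Target NIC goes into its own netns; initiator NIC stays in the default ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP 4420
    # The two ping checks that follow confirm 10.0.0.1 <-> 10.0.0.2 reachability
    # across the namespace boundary before the nvmf target is started.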
10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:27.665 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:27.665 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:24:27.665 00:24:27.665 --- 10.0.0.2 ping statistics --- 00:24:27.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.665 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:27.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:27.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:24:27.665 00:24:27.665 --- 10.0.0.1 ping statistics --- 00:24:27.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.665 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=1633749 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 1633749 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1633749 ']' 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:27.665 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:27.666 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:27.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:27.666 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:27.666 10:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:27.666 [2024-12-12 10:38:00.933172] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:24:27.666 [2024-12-12 10:38:00.933214] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:27.666 [2024-12-12 10:38:01.007853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.666 [2024-12-12 10:38:01.046168] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:27.666 [2024-12-12 10:38:01.046205] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:27.666 [2024-12-12 10:38:01.046213] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:27.666 [2024-12-12 10:38:01.046219] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:27.666 [2024-12-12 10:38:01.046224] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:27.666 [2024-12-12 10:38:01.046721] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:27.666 10:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:27.666 10:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:24:27.666 10:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:27.666 10:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:27.666 10:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:27.666 10:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:27.666 10:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:27.666 10:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.666 10:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:27.666 [2024-12-12 10:38:01.188830] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:27.666 [2024-12-12 10:38:01.197010] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:27.666 null0 00:24:27.666 [2024-12-12 10:38:01.228991] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:27.666 10:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.666 10:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1633775 00:24:27.666 10:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:24:27.666 10:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1633775 /tmp/host.sock 00:24:27.666 10:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1633775 ']' 00:24:27.666 10:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:24:27.666 10:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:27.666 10:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:27.666 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:27.666 10:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:27.666 10:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:27.666 [2024-12-12 10:38:01.298780] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:24:27.666 [2024-12-12 10:38:01.298821] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1633775 ] 00:24:27.666 [2024-12-12 10:38:01.371928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.666 [2024-12-12 10:38:01.413808] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.666 10:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:27.666 10:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:24:27.666 10:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:27.666 10:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:27.666 10:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.666 10:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:27.666 10:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.666 10:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:27.666 10:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.666 10:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:27.666 10:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.666 10:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:27.666 10:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.666 10:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:28.601 [2024-12-12 10:38:02.555201] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:28.601 [2024-12-12 10:38:02.555221] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:28.601 [2024-12-12 10:38:02.555235] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:28.859 [2024-12-12 10:38:02.641493] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:28.859 [2024-12-12 10:38:02.857543] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:24:28.859 [2024-12-12 10:38:02.858310] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xa98a10:1 started. 00:24:28.859 [2024-12-12 10:38:02.859633] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:28.859 [2024-12-12 10:38:02.859674] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:28.859 [2024-12-12 10:38:02.859693] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:28.859 [2024-12-12 10:38:02.859705] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:28.859 [2024-12-12 10:38:02.859721] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:28.859 10:38:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.859 10:38:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:28.859 10:38:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:28.859 [2024-12-12 10:38:02.864479] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xa98a10 was disconnected and freed. delete nvme_qpair. 
00:24:28.859 10:38:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:28.859 10:38:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:28.859 10:38:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:28.859 10:38:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.859 10:38:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:28.859 10:38:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:29.117 10:38:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.117 10:38:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:29.117 10:38:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:29.117 10:38:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:29.117 10:38:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:29.117 10:38:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:29.117 10:38:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:29.117 10:38:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:29.117 10:38:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.117 10:38:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:29.117 10:38:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:29.117 10:38:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:29.117 10:38:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.117 10:38:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:29.117 10:38:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:30.053 10:38:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:30.053 10:38:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:30.053 10:38:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:30.053 10:38:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.053 10:38:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:30.053 10:38:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:30.053 10:38:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:30.053 10:38:04 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.311 10:38:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:30.311 10:38:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:31.248 10:38:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:31.248 10:38:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:31.248 10:38:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:31.248 10:38:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.248 10:38:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:31.248 10:38:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:31.248 10:38:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:31.248 10:38:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.248 10:38:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:31.248 10:38:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:32.185 10:38:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:32.185 10:38:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:32.185 10:38:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:32.185 10:38:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.185 10:38:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:32.185 10:38:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:32.185 10:38:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:32.185 10:38:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.185 10:38:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:32.185 10:38:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:33.561 10:38:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:33.561 10:38:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:33.561 10:38:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:33.561 10:38:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.561 10:38:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:33.561 10:38:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:33.561 10:38:07 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:33.561 10:38:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.561 10:38:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:33.561 10:38:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:34.497 10:38:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:34.497 10:38:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:34.497 10:38:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:34.497 10:38:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.497 10:38:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:34.497 10:38:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:34.497 10:38:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:34.497 10:38:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.497 [2024-12-12 10:38:08.301184] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:34.497 [2024-12-12 10:38:08.301224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.497 [2024-12-12 10:38:08.301251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.497 [2024-12-12 10:38:08.301261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.497 [2024-12-12 10:38:08.301268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.497 [2024-12-12 10:38:08.301275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.497 [2024-12-12 10:38:08.301282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.498 [2024-12-12 10:38:08.301289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.498 [2024-12-12 10:38:08.301296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.498 [2024-12-12 10:38:08.301303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.498 [2024-12-12 10:38:08.301310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.498 [2024-12-12 10:38:08.301316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa751d0 is same with the state(6) to be set 00:24:34.498 10:38:08 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:34.498 10:38:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:34.498 [2024-12-12 10:38:08.311205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa751d0 (9): Bad file descriptor 00:24:34.498 [2024-12-12 10:38:08.321241] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:34.498 [2024-12-12 10:38:08.321252] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:34.498 [2024-12-12 10:38:08.321262] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:34.498 [2024-12-12 10:38:08.321267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:34.498 [2024-12-12 10:38:08.321286] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:35.433 10:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:35.433 10:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:35.433 10:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:35.433 10:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.433 10:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:35.433 10:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:35.433 10:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:35.433 [2024-12-12 10:38:09.327636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:35.433 [2024-12-12 10:38:09.327718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa751d0 with addr=10.0.0.2, port=4420 00:24:35.433 [2024-12-12 10:38:09.327751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa751d0 is same with the state(6) to be set 00:24:35.433 [2024-12-12 10:38:09.327804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa751d0 (9): Bad file descriptor 00:24:35.433 [2024-12-12 10:38:09.328752] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:24:35.433 [2024-12-12 10:38:09.328815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:35.433 [2024-12-12 10:38:09.328838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:35.433 [2024-12-12 10:38:09.328861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:35.433 [2024-12-12 10:38:09.328881] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:35.433 [2024-12-12 10:38:09.328897] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:24:35.433 [2024-12-12 10:38:09.328910] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:35.433 [2024-12-12 10:38:09.328931] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:35.433 [2024-12-12 10:38:09.328946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:35.433 10:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.433 10:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:35.433 10:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:36.369 [2024-12-12 10:38:10.331456] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:36.369 [2024-12-12 10:38:10.331480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:36.369 [2024-12-12 10:38:10.331493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:36.369 [2024-12-12 10:38:10.331499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:36.369 [2024-12-12 10:38:10.331507] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:24:36.369 [2024-12-12 10:38:10.331514] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:36.369 [2024-12-12 10:38:10.331523] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:36.369 [2024-12-12 10:38:10.331527] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:24:36.369 [2024-12-12 10:38:10.331548] bdev_nvme.c:7267:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:36.369 [2024-12-12 10:38:10.331575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:36.369 [2024-12-12 10:38:10.331586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.369 [2024-12-12 10:38:10.331597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:36.369 [2024-12-12 10:38:10.331604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.369 [2024-12-12 10:38:10.331611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:36.369 [2024-12-12 10:38:10.331618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.369 [2024-12-12 10:38:10.331625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:36.369 [2024-12-12 10:38:10.331631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.369 [2024-12-12 10:38:10.331638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:36.369 [2024-12-12 10:38:10.331645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.369 [2024-12-12 10:38:10.331652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:24:36.369 [2024-12-12 10:38:10.332119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa64920 (9): Bad file descriptor 00:24:36.369 [2024-12-12 10:38:10.333130] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:36.369 [2024-12-12 10:38:10.333141] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:24:36.369 10:38:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:36.369 10:38:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:36.369 10:38:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:36.369 10:38:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.369 10:38:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:36.369 10:38:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:36.369 10:38:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:36.369 10:38:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.628 10:38:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:36.628 10:38:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:36.628 10:38:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:36.628 10:38:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:36.628 10:38:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:36.629 10:38:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:36.629 10:38:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:36.629 10:38:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.629 10:38:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:36.629 10:38:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:36.629 10:38:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:36.629 10:38:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.629 10:38:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:36.629 10:38:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:37.564 10:38:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:37.564 10:38:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:37.564 10:38:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:37.564 10:38:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.564 10:38:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:37.564 10:38:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:37.564 10:38:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:37.564 10:38:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.822 10:38:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:37.822 10:38:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:38.390 [2024-12-12 10:38:12.391075] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:38.390 [2024-12-12 10:38:12.391093] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:38.390 [2024-12-12 10:38:12.391106] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:38.649 [2024-12-12 10:38:12.518488] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:38.649 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:38.649 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:38.649 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:38.649 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.649 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:38.649 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:38.649 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:38.649 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.649 [2024-12-12 10:38:12.620066] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:24:38.649 [2024-12-12 10:38:12.620559] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0xa77330:1 started. 
00:24:38.649 [2024-12-12 10:38:12.621581] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:38.649 [2024-12-12 10:38:12.621617] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:38.649 [2024-12-12 10:38:12.621634] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:38.649 [2024-12-12 10:38:12.621647] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:38.649 [2024-12-12 10:38:12.621653] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:38.649 [2024-12-12 10:38:12.629276] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0xa77330 was disconnected and freed. delete nvme_qpair. 00:24:38.649 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:38.649 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:40.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:40.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:40.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:40.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:40.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:40.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:40.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:40.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:40.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1633775 00:24:40.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1633775 ']' 00:24:40.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1633775 00:24:40.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:24:40.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:40.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1633775 00:24:40.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:40.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:40.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1633775' 00:24:40.027 killing process with pid 1633775 
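Once nvme1n1 reappears the test passes, clears its trap, and tears down the host app with killprocess, the shared helper whose xtrace is shown here: it confirms the PID is still alive with kill -0, checks via ps that it is not about to signal a sudo wrapper, then kills and waits so the process is reaped before the next test starts. A minimal sketch of that idiom (the real helper lives in common/autotest_common.sh; this is a reconstruction from the trace, not the literal source):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 1             # still running?
        # Never signal a sudo wrapper; the target must be the real process.
        [[ "$(ps --no-headers -o comm= "$pid")" != sudo ]] || return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true   # reap it (works because the app is our child)
    }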
00:24:40.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1633775 00:24:40.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1633775 00:24:40.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:40.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:40.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:24:40.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:40.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:24:40.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:40.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:40.027 rmmod nvme_tcp 00:24:40.027 rmmod nvme_fabrics 00:24:40.027 rmmod nvme_keyring 00:24:40.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:40.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:24:40.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:24:40.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 1633749 ']' 00:24:40.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 1633749 00:24:40.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1633749 ']' 00:24:40.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1633749 00:24:40.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:24:40.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:40.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1633749 00:24:40.027 10:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:40.027 10:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:40.028 10:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1633749' 00:24:40.028 killing process with pid 1633749 00:24:40.028 10:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1633749 00:24:40.028 10:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1633749 00:24:40.287 10:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:40.287 10:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:40.287 10:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:40.287 10:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:24:40.287 10:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:24:40.287 10:38:14 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:40.287 10:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:24:40.287 10:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:40.287 10:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:40.287 10:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.287 10:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:40.287 10:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:42.272 10:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:42.272 00:24:42.272 real 0m21.549s 00:24:42.272 user 0m26.802s 00:24:42.272 sys 0m5.834s 00:24:42.272 10:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:42.272 10:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:42.272 ************************************ 00:24:42.272 END TEST nvmf_discovery_remove_ifc 00:24:42.272 ************************************ 00:24:42.272 10:38:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:42.272 10:38:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:42.272 10:38:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:42.272 10:38:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.532 ************************************ 00:24:42.532 START TEST nvmf_identify_kernel_target 00:24:42.532 ************************************ 00:24:42.532 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:42.532 * Looking for test storage... 
00:24:42.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:42.532 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:42.532 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:24:42.532 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:42.532 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:42.532 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:42.532 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:42.532 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:42.532 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:24:42.532 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:24:42.532 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:24:42.532 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:24:42.532 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:24:42.532 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:24:42.532 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:24:42.532 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:42.532 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:24:42.532 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:24:42.532 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:42.532 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:42.532 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:24:42.532 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:24:42.532 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:42.532 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:24:42.532 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:24:42.532 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:24:42.532 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:24:42.532 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:42.532 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:24:42.532 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:24:42.532 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:42.532 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:42.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.533 --rc genhtml_branch_coverage=1 00:24:42.533 --rc genhtml_function_coverage=1 00:24:42.533 --rc genhtml_legend=1 00:24:42.533 --rc geninfo_all_blocks=1 00:24:42.533 --rc geninfo_unexecuted_blocks=1 00:24:42.533 00:24:42.533 ' 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:42.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.533 --rc genhtml_branch_coverage=1 00:24:42.533 --rc genhtml_function_coverage=1 00:24:42.533 --rc genhtml_legend=1 00:24:42.533 --rc geninfo_all_blocks=1 00:24:42.533 --rc geninfo_unexecuted_blocks=1 00:24:42.533 00:24:42.533 ' 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:42.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.533 --rc genhtml_branch_coverage=1 00:24:42.533 --rc genhtml_function_coverage=1 00:24:42.533 --rc genhtml_legend=1 00:24:42.533 --rc geninfo_all_blocks=1 00:24:42.533 --rc geninfo_unexecuted_blocks=1 00:24:42.533 00:24:42.533 ' 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:42.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.533 --rc genhtml_branch_coverage=1 00:24:42.533 --rc genhtml_function_coverage=1 00:24:42.533 --rc genhtml_legend=1 00:24:42.533 --rc geninfo_all_blocks=1 00:24:42.533 --rc geninfo_unexecuted_blocks=1 00:24:42.533 00:24:42.533 ' 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:24:42.533 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:24:42.533 10:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:24:49.102 10:38:22 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:49.102 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:49.102 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:49.102 Found net devices under 0000:af:00.0: cvl_0_0 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:49.102 Found net devices under 0000:af:00.1: cvl_0_1 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:49.102 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:49.103 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:49.103 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:24:49.103 00:24:49.103 --- 10.0.0.2 ping statistics --- 00:24:49.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:49.103 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:49.103 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:49.103 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:24:49.103 00:24:49.103 --- 10.0.0.1 ping statistics --- 00:24:49.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:49.103 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:49.103 10:38:22 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:49.103 10:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:51.639 Waiting for block devices as requested 00:24:51.639 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:24:51.639 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:51.639 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:51.639 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:51.639 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:51.639 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:51.639 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:51.898 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:51.898 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:51.898 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:52.157 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:52.157 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:52.157 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:52.157 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:52.416 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:52.416 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:52.416 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:52.675 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:52.675 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:52.675 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:24:52.675 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:24:52.675 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:52.675 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
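configure_kernel_target drives the in-kernel nvmet target purely through configfs: create a subsystem named nqn.2016-06.io.spdk:testnqn, expose /dev/nvme0n1 as namespace 1, open a TCP port on 10.0.0.1:4420, and link the subsystem into the port. Bash xtrace does not print redirection targets, so the destinations of the echo commands are invisible in the trace that follows; the sketch below reconstructs them using the standard nvmet configfs attribute files (the exact attribute paths are an assumption based on the usual kernel layout, not copied from the log):

    # Condensed sketch of the nvmet configfs setup traced below.
    nqn=nqn.2016-06.io.spdk:testnqn
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/$nqn
    port=$nvmet/ports/1

    modprobe nvmet               # nvmet-tcp is auto-requested on the trtype write
    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$port"
    echo "SPDK-$nqn"  > "$subsys/attr_model"                # model string
    echo 1            > "$subsys/attr_allow_any_host"       # no host allowlist
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"  # back the namespace
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"   # expose the subsystem on the port

With the symlink in place, the nvme discover call in the trace returns the two discovery-log records shown (the discovery subsystem itself plus nqn.2016-06.io.spdk:testnqn).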
00:24:52.675 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:24:52.675 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:52.675 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:52.675 No valid GPT data, bailing 00:24:52.675 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:52.675 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:24:52.675 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:24:52.675 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:24:52.675 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:24:52.675 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:52.675 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:52.675 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:52.675 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:52.675 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:24:52.675 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:24:52.675 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:24:52.675 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:24:52.675 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:24:52.675 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:24:52.675 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:24:52.676 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:52.676 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:24:52.676 00:24:52.676 Discovery Log Number of Records 2, Generation counter 2 00:24:52.676 =====Discovery Log Entry 0====== 00:24:52.676 trtype: tcp 00:24:52.676 adrfam: ipv4 00:24:52.676 subtype: current discovery subsystem 00:24:52.676 treq: not specified, sq flow control disable supported 00:24:52.676 portid: 1 00:24:52.676 trsvcid: 4420 00:24:52.676 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:52.676 traddr: 10.0.0.1 00:24:52.676 eflags: none 00:24:52.676 sectype: none 00:24:52.676 =====Discovery Log Entry 1====== 00:24:52.676 trtype: tcp 00:24:52.676 adrfam: ipv4 00:24:52.676 subtype: nvme subsystem 00:24:52.676 treq: not specified, sq flow control disable 
supported 00:24:52.676 portid: 1 00:24:52.676 trsvcid: 4420 00:24:52.676 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:52.676 traddr: 10.0.0.1 00:24:52.676 eflags: none 00:24:52.676 sectype: none 00:24:52.676 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:52.676 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:52.936 ===================================================== 00:24:52.936 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:52.936 ===================================================== 00:24:52.936 Controller Capabilities/Features 00:24:52.936 ================================ 00:24:52.936 Vendor ID: 0000 00:24:52.936 Subsystem Vendor ID: 0000 00:24:52.936 Serial Number: 5ed3c15a198f21aad907 00:24:52.936 Model Number: Linux 00:24:52.936 Firmware Version: 6.8.9-20 00:24:52.936 Recommended Arb Burst: 0 00:24:52.936 IEEE OUI Identifier: 00 00 00 00:24:52.936 Multi-path I/O 00:24:52.936 May have multiple subsystem ports: No 00:24:52.936 May have multiple controllers: No 00:24:52.936 Associated with SR-IOV VF: No 00:24:52.936 Max Data Transfer Size: Unlimited 00:24:52.936 Max Number of Namespaces: 0 00:24:52.936 Max Number of I/O Queues: 1024 00:24:52.936 NVMe Specification Version (VS): 1.3 00:24:52.936 NVMe Specification Version (Identify): 1.3 00:24:52.936 Maximum Queue Entries: 1024 00:24:52.936 Contiguous Queues Required: No 00:24:52.936 Arbitration Mechanisms Supported 00:24:52.936 Weighted Round Robin: Not Supported 00:24:52.936 Vendor Specific: Not Supported 00:24:52.936 Reset Timeout: 7500 ms 00:24:52.936 Doorbell Stride: 4 bytes 00:24:52.936 NVM Subsystem Reset: Not Supported 00:24:52.936 Command Sets Supported 00:24:52.936 NVM Command Set: Supported 00:24:52.936 Boot Partition: Not Supported 00:24:52.936 Memory Page Size Minimum: 4096 bytes 00:24:52.936 Memory Page Size Maximum: 4096 bytes 00:24:52.936 Persistent Memory Region: Not Supported 00:24:52.936 Optional Asynchronous Events Supported 00:24:52.936 Namespace Attribute Notices: Not Supported 00:24:52.936 Firmware Activation Notices: Not Supported 00:24:52.936 ANA Change Notices: Not Supported 00:24:52.936 PLE Aggregate Log Change Notices: Not Supported 00:24:52.936 LBA Status Info Alert Notices: Not Supported 00:24:52.936 EGE Aggregate Log Change Notices: Not Supported 00:24:52.936 Normal NVM Subsystem Shutdown event: Not Supported 00:24:52.936 Zone Descriptor Change Notices: Not Supported 00:24:52.936 Discovery Log Change Notices: Supported 00:24:52.936 Controller Attributes 00:24:52.936 128-bit Host Identifier: Not Supported 00:24:52.936 Non-Operational Permissive Mode: Not Supported 00:24:52.936 NVM Sets: Not Supported 00:24:52.936 Read Recovery Levels: Not Supported 00:24:52.936 Endurance Groups: Not Supported 00:24:52.936 Predictable Latency Mode: Not Supported 00:24:52.936 Traffic Based Keep ALive: Not Supported 00:24:52.936 Namespace Granularity: Not Supported 00:24:52.936 SQ Associations: Not Supported 00:24:52.936 UUID List: Not Supported 00:24:52.936 Multi-Domain Subsystem: Not Supported 00:24:52.936 Fixed Capacity Management: Not Supported 00:24:52.936 Variable Capacity Management: Not Supported 00:24:52.936 Delete Endurance Group: Not Supported 00:24:52.936 Delete NVM Set: Not Supported 00:24:52.936 Extended LBA Formats Supported: Not Supported 00:24:52.936 Flexible Data Placement 
Supported: Not Supported 00:24:52.936 00:24:52.936 Controller Memory Buffer Support 00:24:52.936 ================================ 00:24:52.936 Supported: No 00:24:52.936 00:24:52.936 Persistent Memory Region Support 00:24:52.936 ================================ 00:24:52.936 Supported: No 00:24:52.936 00:24:52.936 Admin Command Set Attributes 00:24:52.936 ============================ 00:24:52.936 Security Send/Receive: Not Supported 00:24:52.936 Format NVM: Not Supported 00:24:52.936 Firmware Activate/Download: Not Supported 00:24:52.936 Namespace Management: Not Supported 00:24:52.936 Device Self-Test: Not Supported 00:24:52.936 Directives: Not Supported 00:24:52.936 NVMe-MI: Not Supported 00:24:52.936 Virtualization Management: Not Supported 00:24:52.936 Doorbell Buffer Config: Not Supported 00:24:52.936 Get LBA Status Capability: Not Supported 00:24:52.936 Command & Feature Lockdown Capability: Not Supported 00:24:52.936 Abort Command Limit: 1 00:24:52.936 Async Event Request Limit: 1 00:24:52.936 Number of Firmware Slots: N/A 00:24:52.936 Firmware Slot 1 Read-Only: N/A 00:24:52.936 Firmware Activation Without Reset: N/A 00:24:52.936 Multiple Update Detection Support: N/A 00:24:52.936 Firmware Update Granularity: No Information Provided 00:24:52.936 Per-Namespace SMART Log: No 00:24:52.936 Asymmetric Namespace Access Log Page: Not Supported 00:24:52.936 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:52.936 Command Effects Log Page: Not Supported 00:24:52.936 Get Log Page Extended Data: Supported 00:24:52.936 Telemetry Log Pages: Not Supported 00:24:52.936 Persistent Event Log Pages: Not Supported 00:24:52.936 Supported Log Pages Log Page: May Support 00:24:52.936 Commands Supported & Effects Log Page: Not Supported 00:24:52.936 Feature Identifiers & Effects Log Page:May Support 00:24:52.936 NVMe-MI Commands & Effects Log Page: May Support 00:24:52.936 Data Area 4 for Telemetry Log: Not Supported 00:24:52.936 Error Log Page Entries Supported: 1 00:24:52.936 Keep Alive: Not Supported 00:24:52.936 00:24:52.936 NVM Command Set Attributes 00:24:52.936 ========================== 00:24:52.936 Submission Queue Entry Size 00:24:52.936 Max: 1 00:24:52.936 Min: 1 00:24:52.936 Completion Queue Entry Size 00:24:52.936 Max: 1 00:24:52.936 Min: 1 00:24:52.936 Number of Namespaces: 0 00:24:52.936 Compare Command: Not Supported 00:24:52.936 Write Uncorrectable Command: Not Supported 00:24:52.936 Dataset Management Command: Not Supported 00:24:52.936 Write Zeroes Command: Not Supported 00:24:52.936 Set Features Save Field: Not Supported 00:24:52.936 Reservations: Not Supported 00:24:52.936 Timestamp: Not Supported 00:24:52.936 Copy: Not Supported 00:24:52.936 Volatile Write Cache: Not Present 00:24:52.936 Atomic Write Unit (Normal): 1 00:24:52.936 Atomic Write Unit (PFail): 1 00:24:52.936 Atomic Compare & Write Unit: 1 00:24:52.936 Fused Compare & Write: Not Supported 00:24:52.936 Scatter-Gather List 00:24:52.936 SGL Command Set: Supported 00:24:52.936 SGL Keyed: Not Supported 00:24:52.937 SGL Bit Bucket Descriptor: Not Supported 00:24:52.937 SGL Metadata Pointer: Not Supported 00:24:52.937 Oversized SGL: Not Supported 00:24:52.937 SGL Metadata Address: Not Supported 00:24:52.937 SGL Offset: Supported 00:24:52.937 Transport SGL Data Block: Not Supported 00:24:52.937 Replay Protected Memory Block: Not Supported 00:24:52.937 00:24:52.937 Firmware Slot Information 00:24:52.937 ========================= 00:24:52.937 Active slot: 0 00:24:52.937 00:24:52.937 00:24:52.937 Error Log 00:24:52.937 
========= 00:24:52.937 00:24:52.937 Active Namespaces 00:24:52.937 ================= 00:24:52.937 Discovery Log Page 00:24:52.937 ================== 00:24:52.937 Generation Counter: 2 00:24:52.937 Number of Records: 2 00:24:52.937 Record Format: 0 00:24:52.937 00:24:52.937 Discovery Log Entry 0 00:24:52.937 ---------------------- 00:24:52.937 Transport Type: 3 (TCP) 00:24:52.937 Address Family: 1 (IPv4) 00:24:52.937 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:52.937 Entry Flags: 00:24:52.937 Duplicate Returned Information: 0 00:24:52.937 Explicit Persistent Connection Support for Discovery: 0 00:24:52.937 Transport Requirements: 00:24:52.937 Secure Channel: Not Specified 00:24:52.937 Port ID: 1 (0x0001) 00:24:52.937 Controller ID: 65535 (0xffff) 00:24:52.937 Admin Max SQ Size: 32 00:24:52.937 Transport Service Identifier: 4420 00:24:52.937 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:52.937 Transport Address: 10.0.0.1 00:24:52.937 Discovery Log Entry 1 00:24:52.937 ---------------------- 00:24:52.937 Transport Type: 3 (TCP) 00:24:52.937 Address Family: 1 (IPv4) 00:24:52.937 Subsystem Type: 2 (NVM Subsystem) 00:24:52.937 Entry Flags: 00:24:52.937 Duplicate Returned Information: 0 00:24:52.937 Explicit Persistent Connection Support for Discovery: 0 00:24:52.937 Transport Requirements: 00:24:52.937 Secure Channel: Not Specified 00:24:52.937 Port ID: 1 (0x0001) 00:24:52.937 Controller ID: 65535 (0xffff) 00:24:52.937 Admin Max SQ Size: 32 00:24:52.937 Transport Service Identifier: 4420 00:24:52.937 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:52.937 Transport Address: 10.0.0.1 00:24:52.937 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:52.937 get_feature(0x01) failed 00:24:52.937 get_feature(0x02) failed 00:24:52.937 get_feature(0x04) failed 00:24:52.937 ===================================================== 00:24:52.937 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:52.937 ===================================================== 00:24:52.937 Controller Capabilities/Features 00:24:52.937 ================================ 00:24:52.937 Vendor ID: 0000 00:24:52.937 Subsystem Vendor ID: 0000 00:24:52.937 Serial Number: 9b23d25ee63352be81db 00:24:52.937 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:52.937 Firmware Version: 6.8.9-20 00:24:52.937 Recommended Arb Burst: 6 00:24:52.937 IEEE OUI Identifier: 00 00 00 00:24:52.937 Multi-path I/O 00:24:52.937 May have multiple subsystem ports: Yes 00:24:52.937 May have multiple controllers: Yes 00:24:52.937 Associated with SR-IOV VF: No 00:24:52.937 Max Data Transfer Size: Unlimited 00:24:52.937 Max Number of Namespaces: 1024 00:24:52.937 Max Number of I/O Queues: 128 00:24:52.937 NVMe Specification Version (VS): 1.3 00:24:52.937 NVMe Specification Version (Identify): 1.3 00:24:52.937 Maximum Queue Entries: 1024 00:24:52.937 Contiguous Queues Required: No 00:24:52.937 Arbitration Mechanisms Supported 00:24:52.937 Weighted Round Robin: Not Supported 00:24:52.937 Vendor Specific: Not Supported 00:24:52.937 Reset Timeout: 7500 ms 00:24:52.937 Doorbell Stride: 4 bytes 00:24:52.937 NVM Subsystem Reset: Not Supported 00:24:52.937 Command Sets Supported 00:24:52.937 NVM Command Set: Supported 00:24:52.937 Boot Partition: Not Supported 00:24:52.937 
00:24:52.937 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:24:52.937 get_feature(0x01) failed
00:24:52.937 get_feature(0x02) failed
00:24:52.937 get_feature(0x04) failed
00:24:52.937 =====================================================
00:24:52.937 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:24:52.937 =====================================================
00:24:52.937 Controller Capabilities/Features
00:24:52.937 ================================
00:24:52.937 Vendor ID: 0000
00:24:52.937 Subsystem Vendor ID: 0000
00:24:52.937 Serial Number: 9b23d25ee63352be81db
00:24:52.937 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn
00:24:52.937 Firmware Version: 6.8.9-20
00:24:52.937 Recommended Arb Burst: 6
00:24:52.937 IEEE OUI Identifier: 00 00 00
00:24:52.937 Multi-path I/O
00:24:52.937 May have multiple subsystem ports: Yes
00:24:52.937 May have multiple controllers: Yes
00:24:52.937 Associated with SR-IOV VF: No
00:24:52.937 Max Data Transfer Size: Unlimited
00:24:52.937 Max Number of Namespaces: 1024
00:24:52.937 Max Number of I/O Queues: 128
00:24:52.937 NVMe Specification Version (VS): 1.3
00:24:52.937 NVMe Specification Version (Identify): 1.3
00:24:52.937 Maximum Queue Entries: 1024
00:24:52.937 Contiguous Queues Required: No
00:24:52.937 Arbitration Mechanisms Supported
00:24:52.937 Weighted Round Robin: Not Supported
00:24:52.937 Vendor Specific: Not Supported
00:24:52.937 Reset Timeout: 7500 ms
00:24:52.937 Doorbell Stride: 4 bytes
00:24:52.937 NVM Subsystem Reset: Not Supported
00:24:52.937 Command Sets Supported
00:24:52.937 NVM Command Set: Supported
00:24:52.937 Boot Partition: Not Supported
00:24:52.937 Memory Page Size Minimum: 4096 bytes
00:24:52.937 Memory Page Size Maximum: 4096 bytes
00:24:52.937 Persistent Memory Region: Not Supported
00:24:52.937 Optional Asynchronous Events Supported
00:24:52.937 Namespace Attribute Notices: Supported
00:24:52.937 Firmware Activation Notices: Not Supported
00:24:52.937 ANA Change Notices: Supported
00:24:52.937 PLE Aggregate Log Change Notices: Not Supported
00:24:52.937 LBA Status Info Alert Notices: Not Supported
00:24:52.937 EGE Aggregate Log Change Notices: Not Supported
00:24:52.937 Normal NVM Subsystem Shutdown event: Not Supported
00:24:52.937 Zone Descriptor Change Notices: Not Supported
00:24:52.937 Discovery Log Change Notices: Not Supported
00:24:52.937 Controller Attributes
00:24:52.937 128-bit Host Identifier: Supported
00:24:52.937 Non-Operational Permissive Mode: Not Supported
00:24:52.937 NVM Sets: Not Supported
00:24:52.937 Read Recovery Levels: Not Supported
00:24:52.937 Endurance Groups: Not Supported
00:24:52.937 Predictable Latency Mode: Not Supported
00:24:52.937 Traffic Based Keep Alive: Supported
00:24:52.937 Namespace Granularity: Not Supported
00:24:52.937 SQ Associations: Not Supported
00:24:52.937 UUID List: Not Supported
00:24:52.937 Multi-Domain Subsystem: Not Supported
00:24:52.937 Fixed Capacity Management: Not Supported
00:24:52.937 Variable Capacity Management: Not Supported
00:24:52.937 Delete Endurance Group: Not Supported
00:24:52.937 Delete NVM Set: Not Supported
00:24:52.937 Extended LBA Formats Supported: Not Supported
00:24:52.937 Flexible Data Placement Supported: Not Supported
00:24:52.937 
00:24:52.937 Controller Memory Buffer Support
00:24:52.937 ================================
00:24:52.937 Supported: No
00:24:52.937 
00:24:52.937 Persistent Memory Region Support
00:24:52.937 ================================
00:24:52.937 Supported: No
00:24:52.937 
00:24:52.937 Admin Command Set Attributes
00:24:52.937 ============================
00:24:52.937 Security Send/Receive: Not Supported
00:24:52.937 Format NVM: Not Supported
00:24:52.937 Firmware Activate/Download: Not Supported
00:24:52.937 Namespace Management: Not Supported
00:24:52.937 Device Self-Test: Not Supported
00:24:52.937 Directives: Not Supported
00:24:52.937 NVMe-MI: Not Supported
00:24:52.937 Virtualization Management: Not Supported
00:24:52.937 Doorbell Buffer Config: Not Supported
00:24:52.937 Get LBA Status Capability: Not Supported
00:24:52.937 Command & Feature Lockdown Capability: Not Supported
00:24:52.937 Abort Command Limit: 4
00:24:52.937 Async Event Request Limit: 4
00:24:52.937 Number of Firmware Slots: N/A
00:24:52.937 Firmware Slot 1 Read-Only: N/A
00:24:52.937 Firmware Activation Without Reset: N/A
00:24:52.937 Multiple Update Detection Support: N/A
00:24:52.937 Firmware Update Granularity: No Information Provided
00:24:52.937 Per-Namespace SMART Log: Yes
00:24:52.937 Asymmetric Namespace Access Log Page: Supported
00:24:52.937 ANA Transition Time : 10 sec
00:24:52.937 
00:24:52.937 Asymmetric Namespace Access Capabilities
00:24:52.937 ANA Optimized State : Supported
00:24:52.937 ANA Non-Optimized State : Supported
00:24:52.937 ANA Inaccessible State : Supported
00:24:52.937 ANA Persistent Loss State : Supported
00:24:52.937 ANA Change State : Supported
00:24:52.937 ANAGRPID is not changed : No
00:24:52.937 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported
00:24:52.937 
00:24:52.937 ANA Group Identifier Maximum : 128
00:24:52.937 Number of ANA Group Identifiers : 128
00:24:52.937 Max Number of Allowed Namespaces : 1024
00:24:52.937 Subsystem NQN: nqn.2016-06.io.spdk:testnqn
00:24:52.937 Command Effects Log Page: Supported
00:24:52.937 Get Log Page Extended Data: Supported
00:24:52.937 Telemetry Log Pages: Not Supported
00:24:52.937 Persistent Event Log Pages: Not Supported
00:24:52.937 Supported Log Pages Log Page: May Support
00:24:52.937 Commands Supported & Effects Log Page: Not Supported
00:24:52.937 Feature Identifiers & Effects Log Page: May Support
00:24:52.937 NVMe-MI Commands & Effects Log Page: May Support
00:24:52.937 Data Area 4 for Telemetry Log: Not Supported
00:24:52.937 Error Log Page Entries Supported: 128
00:24:52.937 Keep Alive: Supported
00:24:52.937 Keep Alive Granularity: 1000 ms
00:24:52.937 
00:24:52.937 NVM Command Set Attributes
00:24:52.937 ==========================
00:24:52.937 Submission Queue Entry Size
00:24:52.937 Max: 64
00:24:52.937 Min: 64
00:24:52.937 Completion Queue Entry Size
00:24:52.937 Max: 16
00:24:52.937 Min: 16
00:24:52.937 Number of Namespaces: 1024
00:24:52.937 Compare Command: Not Supported
00:24:52.937 Write Uncorrectable Command: Not Supported
00:24:52.937 Dataset Management Command: Supported
00:24:52.937 Write Zeroes Command: Supported
00:24:52.937 Set Features Save Field: Not Supported
00:24:52.937 Reservations: Not Supported
00:24:52.937 Timestamp: Not Supported
00:24:52.937 Copy: Not Supported
00:24:52.937 Volatile Write Cache: Present
00:24:52.937 Atomic Write Unit (Normal): 1
00:24:52.937 Atomic Write Unit (PFail): 1
00:24:52.937 Atomic Compare & Write Unit: 1
00:24:52.937 Fused Compare & Write: Not Supported
00:24:52.937 Scatter-Gather List
00:24:52.937 SGL Command Set: Supported
00:24:52.937 SGL Keyed: Not Supported
00:24:52.938 SGL Bit Bucket Descriptor: Not Supported
00:24:52.938 SGL Metadata Pointer: Not Supported
00:24:52.938 Oversized SGL: Not Supported
00:24:52.938 SGL Metadata Address: Not Supported
00:24:52.938 SGL Offset: Supported
00:24:52.938 Transport SGL Data Block: Not Supported
00:24:52.938 Replay Protected Memory Block: Not Supported
00:24:52.938 
00:24:52.938 Firmware Slot Information
00:24:52.938 =========================
00:24:52.938 Active slot: 0
00:24:52.938 
00:24:52.938 Asymmetric Namespace Access
00:24:52.938 ===========================
00:24:52.938 Change Count : 0
00:24:52.938 Number of ANA Group Descriptors : 1
00:24:52.938 ANA Group Descriptor : 0
00:24:52.938 ANA Group ID : 1
00:24:52.938 Number of NSID Values : 1
00:24:52.938 Change Count : 0
00:24:52.938 ANA State : 1
00:24:52.938 Namespace Identifier : 1
00:24:52.938 
00:24:52.938 Commands Supported and Effects
00:24:52.938 ==============================
00:24:52.938 Admin Commands
00:24:52.938 --------------
00:24:52.938 Get Log Page (02h): Supported
00:24:52.938 Identify (06h): Supported
00:24:52.938 Abort (08h): Supported
00:24:52.938 Set Features (09h): Supported
00:24:52.938 Get Features (0Ah): Supported
00:24:52.938 Asynchronous Event Request (0Ch): Supported
00:24:52.938 Keep Alive (18h): Supported
00:24:52.938 I/O Commands
00:24:52.938 ------------
00:24:52.938 Flush (00h): Supported
00:24:52.938 Write (01h): Supported LBA-Change
00:24:52.938 Read (02h): Supported
00:24:52.938 Write Zeroes (08h): Supported LBA-Change
00:24:52.938 Dataset Management (09h): Supported
00:24:52.938 
00:24:52.938 Error Log
00:24:52.938 =========
00:24:52.938 Entry: 0
00:24:52.938 Error Count: 0x3
00:24:52.938 Submission Queue Id: 0x0
00:24:52.938 Command Id: 0x5
00:24:52.938 Phase Bit: 0
00:24:52.938 Status Code: 0x2
00:24:52.938 Status Code Type: 0x0
00:24:52.938 Do Not Retry: 1
00:24:52.938 Error Location: 0x28
00:24:52.938 LBA: 0x0
00:24:52.938 Namespace: 0x0
00:24:52.938 Vendor Log Page: 0x0
00:24:52.938 -----------
00:24:52.938 Entry: 1
00:24:52.938 Error Count: 0x2
00:24:52.938 Submission Queue Id: 0x0
00:24:52.938 Command Id: 0x5
00:24:52.938 Phase Bit: 0
00:24:52.938 Status Code: 0x2
00:24:52.938 Status Code Type: 0x0
00:24:52.938 Do Not Retry: 1
00:24:52.938 Error Location: 0x28
00:24:52.938 LBA: 0x0
00:24:52.938 Namespace: 0x0
00:24:52.938 Vendor Log Page: 0x0
00:24:52.938 -----------
00:24:52.938 Entry: 2
00:24:52.938 Error Count: 0x1
00:24:52.938 Submission Queue Id: 0x0
00:24:52.938 Command Id: 0x4
00:24:52.938 Phase Bit: 0
00:24:52.938 Status Code: 0x2
00:24:52.938 Status Code Type: 0x0
00:24:52.938 Do Not Retry: 1
00:24:52.938 Error Location: 0x28
00:24:52.938 LBA: 0x0
00:24:52.938 Namespace: 0x0
00:24:52.938 Vendor Log Page: 0x0
00:24:52.938 
00:24:52.938 Number of Queues
00:24:52.938 ================
00:24:52.938 Number of I/O Submission Queues: 128
00:24:52.938 Number of I/O Completion Queues: 128
00:24:52.938 
00:24:52.938 ZNS Specific Controller Data
00:24:52.938 ============================
00:24:52.938 Zone Append Size Limit: 0
00:24:52.938 
00:24:52.938 
00:24:52.938 Active Namespaces
00:24:52.938 =================
00:24:52.938 get_feature(0x05) failed
00:24:52.938 Namespace ID:1
00:24:52.938 Command Set Identifier: NVM (00h)
00:24:52.938 Deallocate: Supported
00:24:52.938 Deallocated/Unwritten Error: Not Supported
00:24:52.938 Deallocated Read Value: Unknown
00:24:52.938 Deallocate in Write Zeroes: Not Supported
00:24:52.938 Deallocated Guard Field: 0xFFFF
00:24:52.938 Flush: Supported
00:24:52.938 Reservation: Not Supported
00:24:52.938 Namespace Sharing Capabilities: Multiple Controllers
00:24:52.938 Size (in LBAs): 1953525168 (931GiB)
00:24:52.938 Capacity (in LBAs): 1953525168 (931GiB)
00:24:52.938 Utilization (in LBAs): 1953525168 (931GiB)
00:24:52.938 UUID: fdaf2c36-7313-4304-a626-30923385bd17
00:24:52.938 Thin Provisioning: Not Supported
00:24:52.938 Per-NS Atomic Units: Yes
00:24:52.938 Atomic Boundary Size (Normal): 0
00:24:52.938 Atomic Boundary Size (PFail): 0
00:24:52.938 Atomic Boundary Offset: 0
00:24:52.938 NGUID/EUI64 Never Reused: No
00:24:52.938 ANA group ID: 1
00:24:52.938 Namespace Write Protected: No
00:24:52.938 Number of LBA Formats: 1
00:24:52.938 Current LBA Format: LBA Format #00
00:24:52.938 LBA Format #00: Data Size: 512 Metadata Size: 0
00:24:52.938 
00:24:52.938 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini
00:24:52.938 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:52.938 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync
00:24:52.938 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:52.938 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e
00:24:52.938 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:52.938 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:52.938 rmmod nvme_tcp
00:24:52.938 rmmod nvme_fabrics
00:24:52.938 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:52.938 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e
00:24:52.938 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0
00:24:52.938 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:24:52.938 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:24:52.938 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:24:52.938 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:24:52.938 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr
00:24:52.938 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save
00:24:52.938 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:24:52.938 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore
00:24:52.938 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:24:52.938 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns
00:24:52.938 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:52.938 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:52.938 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:55.473 10:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:24:55.473 10:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target
00:24:55.473 10:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]]
00:24:55.473 10:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0
00:24:55.473 10:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
00:24:55.473 10:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:24:55.473 10:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:24:55.473 10:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:24:55.473 10:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*)
00:24:55.473 10:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet
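The clean_kernel_target trace above is a plain configfs teardown of the kernel nvmet target; collected into a standalone sketch it is roughly the following (the bare "echo 0" in the trace is, as far as I can tell, aimed at the namespace enable attribute, so treat that line as an assumption):

  nqn=nqn.2016-06.io.spdk:testnqn
  cfg=/sys/kernel/config/nvmet
  echo 0 > $cfg/subsystems/$nqn/namespaces/1/enable   # quiesce the namespace (assumed target of the echo 0 above)
  rm -f $cfg/ports/1/subsystems/$nqn                  # unlink the subsystem from the port
  rmdir $cfg/subsystems/$nqn/namespaces/1             # children before parents
  rmdir $cfg/ports/1
  rmdir $cfg/subsystems/$nqn
  modprobe -r nvmet_tcp nvmet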
00:24:55.473 10:38:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:24:58.009 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:24:58.009 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:24:58.009 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:24:58.009 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:24:58.009 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:24:58.009 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:24:58.009 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:24:58.009 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:24:58.009 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:24:58.009 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:24:58.009 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:24:58.009 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:24:58.009 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:24:58.009 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:24:58.009 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:24:58.009 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:24:58.947 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
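The ioatdma/nvme -> vfio-pci moves above are setup.sh doing the usual sysfs rebind per device; done by hand it looks roughly like this (BDF picked from this run for illustration; assumes the vfio-pci module is already loaded):

  bdf=0000:5e:00.0
  echo "$bdf" > /sys/bus/pci/devices/$bdf/driver/unbind    # detach from the current driver (nvme here)
  echo vfio-pci > /sys/bus/pci/devices/$bdf/driver_override
  echo "$bdf" > /sys/bus/pci/drivers_probe                 # reprobe; driver_override pins it to vfio-pci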
00:24:58.947 
00:24:58.947 real 0m16.510s
00:24:58.947 user 0m4.320s
00:24:58.947 sys 0m8.610s
00:24:58.947 10:38:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:58.947 10:38:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
00:24:58.947 ************************************
00:24:58.947 END TEST nvmf_identify_kernel_target
00:24:58.947 ************************************
00:24:58.947 10:38:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:24:58.947 10:38:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:24:58.947 10:38:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:58.947 10:38:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:24:58.947 ************************************
00:24:58.947 START TEST nvmf_auth_host
00:24:58.947 ************************************
00:24:58.947 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:24:59.206 * Looking for test storage...
00:24:59.206 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:24:59.206 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:24:59.206 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version
00:24:59.206 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:24:59.206 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:24:59.206 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:24:59.206 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l
00:24:59.206 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l
00:24:59.206 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-:
00:24:59.206 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1
00:24:59.206 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-:
00:24:59.206 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2
00:24:59.206 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<'
00:24:59.206 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2
00:24:59.206 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1
00:24:59.206 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:24:59.206 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in
00:24:59.206 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1
00:24:59.206 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 ))
00:24:59.206 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:24:59.206 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1
00:24:59.206 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1
00:24:59.206 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:24:59.206 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1
00:24:59.206 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1
00:24:59.206 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2
00:24:59.206 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2
00:24:59.206 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:24:59.206 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2
00:24:59.206 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2
00:24:59.206 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:24:59.206 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:24:59.206 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0
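The cmp_versions walk above (split both versions on dots, compare field by field, missing fields count as zero) is easier to see as a single function; a condensed sketch, not the script's literal code:

  lt() { # lt 1.15 2 -> true, because 1 < 2 in the first field
    local -a v1 v2; local i n
    IFS=. read -ra v1 <<< "$1"
    IFS=. read -ra v2 <<< "$2"
    n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
      (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
      (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1 # equal versions are not less-than
  }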
00:24:59.206 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:24:59.206 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:24:59.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:59.206 --rc genhtml_branch_coverage=1
00:24:59.206 --rc genhtml_function_coverage=1
00:24:59.206 --rc genhtml_legend=1
00:24:59.206 --rc geninfo_all_blocks=1
00:24:59.206 --rc geninfo_unexecuted_blocks=1
00:24:59.206 
00:24:59.206 '
00:24:59.206 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:24:59.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:59.206 --rc genhtml_branch_coverage=1
00:24:59.206 --rc genhtml_function_coverage=1
00:24:59.206 --rc genhtml_legend=1
00:24:59.206 --rc geninfo_all_blocks=1
00:24:59.206 --rc geninfo_unexecuted_blocks=1
00:24:59.206 
00:24:59.206 '
00:24:59.206 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:24:59.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:59.206 --rc genhtml_branch_coverage=1
00:24:59.207 --rc genhtml_function_coverage=1
00:24:59.207 --rc genhtml_legend=1
00:24:59.207 --rc geninfo_all_blocks=1
00:24:59.207 --rc geninfo_unexecuted_blocks=1
00:24:59.207 
00:24:59.207 '
00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:24:59.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:59.207 --rc genhtml_branch_coverage=1
00:24:59.207 --rc genhtml_function_coverage=1
00:24:59.207 --rc genhtml_legend=1
00:24:59.207 --rc geninfo_all_blocks=1
00:24:59.207 --rc geninfo_unexecuted_blocks=1
00:24:59.207 
00:24:59.207 '
00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s
00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:24:59.207 10:38:33
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:59.207 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:59.207 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.777 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:05.777 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:05.777 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:05.777 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:05.777 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:05.777 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:05.777 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:05.777 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:05.777 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:05.777 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:25:05.777 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:05.777 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:25:05.777 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:05.777 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:25:05.777 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:05.777 10:38:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:05.777 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:05.777 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:05.777 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:05.777 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:05.777 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:05.777 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:05.777 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:05.777 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:05.777 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:05.777 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:05.777 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:05.777 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:05.777 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:05.777 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:05.777 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:05.777 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:05.777 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:05.777 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:05.777 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:05.777 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:05.777 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:05.777 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:05.777 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.777 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.777 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:05.777 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:05.777 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:05.777 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:05.777 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:05.777 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:05.777 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.777 
10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.777 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:05.778 Found net devices under 0000:af:00.0: cvl_0_0 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:05.778 Found net devices under 0000:af:00.1: cvl_0_1 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:05.778 10:38:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:05.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:05.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms 00:25:05.778 00:25:05.778 --- 10.0.0.2 ping statistics --- 00:25:05.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.778 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:05.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:05.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:25:05.778 00:25:05.778 --- 10.0.0.1 ping statistics --- 00:25:05.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.778 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=1645772 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 1645772 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1645772 ']' 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
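The plumbing traced above gives the target its own network namespace so that the 10.0.0.1 <-> 10.0.0.2 traffic really crosses the two NIC ports; stripped of the helper functions, the same setup is just:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator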
00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:05.778 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.778 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:05.778 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:05.778 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:05.778 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:05.778 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.778 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:05.778 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:05.778 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:05.778 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:05.778 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:05.778 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:05.778 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:05.778 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:05.778 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:05.778 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=93fbd3d8c92ad9f68a9315a92dfa17b8 00:25:05.778 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:05.778 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Umk 00:25:05.778 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 93fbd3d8c92ad9f68a9315a92dfa17b8 0 00:25:05.778 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 93fbd3d8c92ad9f68a9315a92dfa17b8 0 00:25:05.778 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:05.778 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:05.778 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=93fbd3d8c92ad9f68a9315a92dfa17b8 00:25:05.778 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:05.778 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:05.778 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Umk 00:25:05.778 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Umk 00:25:05.778 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Umk 00:25:05.778 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:05.778 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:05.778 10:38:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:05.778 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:05.778 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:05.778 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:05.778 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f072cfa1078f9602b52c397be30e1c86e99332dc7bbf1abcb260c5eab322a6ee 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.i06 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f072cfa1078f9602b52c397be30e1c86e99332dc7bbf1abcb260c5eab322a6ee 3 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f072cfa1078f9602b52c397be30e1c86e99332dc7bbf1abcb260c5eab322a6ee 3 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f072cfa1078f9602b52c397be30e1c86e99332dc7bbf1abcb260c5eab322a6ee 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.i06 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.i06 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.i06 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c9b871bc3be97a0cf0acc0434928e5f7cd98cbb111373165 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Qi0 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c9b871bc3be97a0cf0acc0434928e5f7cd98cbb111373165 0 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c9b871bc3be97a0cf0acc0434928e5f7cd98cbb111373165 0 
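Each gen_dhchap_key run above is the same three steps: read N random bytes with xxd, wrap them in the DHHC-1 secret representation with a small python helper, and drop the result into a mode-0600 temp file. A sketch of the sha512/64-hex-digit case (the base64-over-key-plus-little-endian-CRC32 layout, and the 03 digest id for sha512, are my reading of the DHHC-1 format, not something this log states):

  key=$(xxd -p -c0 -l 32 /dev/urandom)      # 32 random bytes as 64 hex digits
  file=$(mktemp -t spdk.key-sha512.XXX)
  python3 -c 'import base64,struct,sys,zlib; k=bytes.fromhex(sys.argv[1]); print("DHHC-1:03:%s:" % base64.b64encode(k+struct.pack("<I", zlib.crc32(k) & 0xffffffff)).decode())' "$key" > "$file"
  chmod 0600 "$file"                        # secrets, so owner-only, as the script does

Recent nvme-cli builds also ship a gen-dhchap-key subcommand that emits the same shape of string, which makes a convenient cross-check.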
00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c9b871bc3be97a0cf0acc0434928e5f7cd98cbb111373165 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Qi0 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Qi0 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Qi0 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4fcb9e5441d936499822e3b597c773344d32e3a26f54b7ac 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.WJQ 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4fcb9e5441d936499822e3b597c773344d32e3a26f54b7ac 2 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4fcb9e5441d936499822e3b597c773344d32e3a26f54b7ac 2 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4fcb9e5441d936499822e3b597c773344d32e3a26f54b7ac 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.WJQ 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.WJQ 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.WJQ 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:05.779 10:38:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=57db06789238154b7e1408524df7bc39 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.KWB 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 57db06789238154b7e1408524df7bc39 1 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 57db06789238154b7e1408524df7bc39 1 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=57db06789238154b7e1408524df7bc39 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.KWB 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.KWB 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.KWB 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5005b00456f065a5241cec0fc307fb0b 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.71v 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5005b00456f065a5241cec0fc307fb0b 1 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5005b00456f065a5241cec0fc307fb0b 1 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=5005b00456f065a5241cec0fc307fb0b 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.71v 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.71v 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.71v 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=54788558e8378b390cd56391c40e0925af8f1031e3385f92 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.EGT 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 54788558e8378b390cd56391c40e0925af8f1031e3385f92 2 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 54788558e8378b390cd56391c40e0925af8f1031e3385f92 2 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=54788558e8378b390cd56391c40e0925af8f1031e3385f92 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.EGT 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.EGT 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.EGT 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:05.779 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:05.780 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:05.780 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:05.780 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:05.780 10:38:39 
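Every gen_dhchap_key call traced here follows the same recipe: xxd -p pulls N random bytes from /dev/urandom (so the secret itself is an ASCII hex string of 2N characters), mktemp picks a /tmp/spdk.key-<digest>.XXX file, and an inline `python -` snippet wraps the hex string into the DHHC-1:<hash>:<base64>: form used for NVMe DH-HMAC-CHAP secrets, the two-digit hash field running 00/01/02/03 for null/sha256/sha384/sha512. The python body is hidden by xtrace; comparing the printed inputs with the resulting key strings suggests it base64-encodes the secret plus a 4-byte CRC-32, so a standalone sketch of the transformation (an approximation, not SPDK's exact helper) would be:

key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex chars of secret material
digest=0                               # 0=null, 1=sha256, 2=sha384, 3=sha512
KEY="$key" DIGEST="$digest" python3 - << 'EOF'
import base64, os, zlib
key = os.environ["KEY"].encode()
# 4-byte CRC-32 of the secret, little-endian, appended before base64-encoding
crc = zlib.crc32(key).to_bytes(4, "little")
print(f"DHHC-1:{int(os.environ['DIGEST']):02x}:{base64.b64encode(key + crc).decode()}:")
EOF

Base64-decoding the payload of /tmp/spdk.key-null.Qi0 back to the 48-character hex string printed above plus four trailing checksum bytes is a quick way to confirm that layout.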
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:05.780 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ece4d14715c01236849a9fe6a50f266d 00:25:05.780 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:05.780 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.fub 00:25:05.780 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ece4d14715c01236849a9fe6a50f266d 0 00:25:05.780 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ece4d14715c01236849a9fe6a50f266d 0 00:25:05.780 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:05.780 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:05.780 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ece4d14715c01236849a9fe6a50f266d 00:25:05.780 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:05.780 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:05.780 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.fub 00:25:05.780 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.fub 00:25:05.780 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.fub 00:25:05.780 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:05.780 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:05.780 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:05.780 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:05.780 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:05.780 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:05.780 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:05.780 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=685afb0a9a69c3cd227adb5b62db1986a5be957dececf47c97333b700a5abd73 00:25:05.780 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:05.780 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.ZKG 00:25:05.780 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 685afb0a9a69c3cd227adb5b62db1986a5be957dececf47c97333b700a5abd73 3 00:25:05.780 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 685afb0a9a69c3cd227adb5b62db1986a5be957dececf47c97333b700a5abd73 3 00:25:05.780 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:05.780 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:05.780 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=685afb0a9a69c3cd227adb5b62db1986a5be957dececf47c97333b700a5abd73 00:25:05.780 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:05.780 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:25:05.780 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.ZKG 00:25:05.780 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.ZKG 00:25:05.780 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.ZKG 00:25:05.780 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:05.780 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1645772 00:25:05.780 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1645772 ']' 00:25:05.780 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.780 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:05.780 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:05.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:05.780 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:05.780 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.039 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:06.039 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:06.039 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:06.039 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Umk 00:25:06.039 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.039 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.039 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.039 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.i06 ]] 00:25:06.039 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.i06 00:25:06.039 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.039 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.039 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.039 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:06.039 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Qi0 00:25:06.039 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.039 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.039 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.039 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.WJQ ]] 00:25:06.039 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.WJQ 00:25:06.039 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.039 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.039 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.039 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:06.039 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.KWB 00:25:06.039 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.039 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.039 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.039 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.71v ]] 00:25:06.039 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.71v 00:25:06.039 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.039 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.039 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.039 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:06.039 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.EGT 00:25:06.039 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.039 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.039 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.039 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.fub ]] 00:25:06.039 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.fub 00:25:06.039 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.039 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.039 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.039 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:06.039 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.ZKG 00:25:06.039 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.040 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.040 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.040 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:06.040 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:06.040 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:06.040 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:06.040 10:38:39 
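With all five key/ckey pairs on disk (ckeys[4] deliberately left empty, so key ID 4 can exercise unidirectional authentication later), waitforlisten confirms the SPDK target (PID 1645772) is answering on /var/tmp/spdk.sock, and each secret file is registered with the target's keyring under the short names key0-key4 and ckey0-ckey3 that the later attach calls reference. Outside the harness's rpc_cmd wrapper, that registration is plain scripts/rpc.py invocations (paths relative to the SPDK checkout); a sketch using the first two pairs of file names from this run:

# Keyring names, not file paths, are what --dhchap-key/--dhchap-ctrlr-key take later
scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.Umk
scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.i06
scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.Qi0
scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.WJQ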
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:06.040 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:06.040 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.040 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.040 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:06.040 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.040 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:06.040 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:06.040 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:06.040 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:06.040 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:06.040 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:06.040 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:06.040 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:06.040 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:06.040 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:25:06.040 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:06.040 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:06.040 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:06.040 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:08.572 Waiting for block devices as requested 00:25:08.831 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:08.831 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:08.831 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:09.090 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:09.090 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:09.090 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:09.090 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:09.348 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:09.348 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:09.348 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:09.607 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:09.607 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:09.607 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:09.607 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:09.865 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:09.865 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:09.865 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:10.432 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:10.432 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:10.432 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:10.432 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:10.432 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:10.432 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:10.432 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:10.432 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:10.432 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:10.692 No valid GPT data, bailing 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:10.692 10:38:44 
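After setup.sh reset rebinds the NVMe device from vfio-pci back to the kernel driver and the GPT probe confirms nvme0n1 carries no partition table ("No valid GPT data, bailing" is the expected outcome here), configure_kernel_target builds a kernel NVMe-oF target around it: the three mkdirs above create the subsystem, its namespace 1, and port 1 in nvmet's configfs tree, and the echo/ln traces that follow fill in their attributes. xtrace does not show the redirection targets, so the mapping below onto the standard nvmet attribute names is inferred rather than read from the log:

nvmet=/sys/kernel/config/nvmet
sub=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
echo SPDK-nqn.2024-02.io.spdk:cnode0 > $sub/attr_model     # model string
echo /dev/nvme0n1 > $sub/namespaces/1/device_path          # back the ns with the local disk
echo 1            > $sub/namespaces/1/enable
echo 10.0.0.1 > $nvmet/ports/1/addr_traddr                 # listen address
echo tcp      > $nvmet/ports/1/addr_trtype
echo 4420     > $nvmet/ports/1/addr_trsvcid
echo ipv4     > $nvmet/ports/1/addr_adrfam
ln -s $sub $nvmet/ports/1/subsystems/                      # expose the subsystem on the port

A second `echo 1` in the trace most likely sets attr_allow_any_host, which host/auth.sh@37's later `echo 0` clears again once the allowed_hosts entry is in place. The nvme discover output that follows is the first functional check: the kernel target must advertise both the discovery subsystem and nqn.2024-02.io.spdk:cnode0 on 10.0.0.1:4420 before any authentication is attempted.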
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:10.692 00:25:10.692 Discovery Log Number of Records 2, Generation counter 2 00:25:10.692 =====Discovery Log Entry 0====== 00:25:10.692 trtype: tcp 00:25:10.692 adrfam: ipv4 00:25:10.692 subtype: current discovery subsystem 00:25:10.692 treq: not specified, sq flow control disable supported 00:25:10.692 portid: 1 00:25:10.692 trsvcid: 4420 00:25:10.692 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:10.692 traddr: 10.0.0.1 00:25:10.692 eflags: none 00:25:10.692 sectype: none 00:25:10.692 =====Discovery Log Entry 1====== 00:25:10.692 trtype: tcp 00:25:10.692 adrfam: ipv4 00:25:10.692 subtype: nvme subsystem 00:25:10.692 treq: not specified, sq flow control disable supported 00:25:10.692 portid: 1 00:25:10.692 trsvcid: 4420 00:25:10.692 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:10.692 traddr: 10.0.0.1 00:25:10.692 eflags: none 00:25:10.692 sectype: none 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliODcxYmMzYmU5N2EwY2YwYWNjMDQzNDkyOGU1ZjdjZDk4Y2JiMTExMzczMTY1GRpaDg==: 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliODcxYmMzYmU5N2EwY2YwYWNjMDQzNDkyOGU1ZjdjZDk4Y2JiMTExMzczMTY1GRpaDg==: 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: ]] 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:10.692 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:10.693 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.693 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.952 nvme0n1 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTNmYmQzZDhjOTJhZDlmNjhhOTMxNWE5MmRmYTE3YjjtLrcH: 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA3MmNmYTEwNzhmOTYwMmI1MmMzOTdiZTMwZTFjODZlOTkzMzJkYzdiYmYxYWJjYjI2MGM1ZWFiMzIyYTZlZbi5bLQ=: 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTNmYmQzZDhjOTJhZDlmNjhhOTMxNWE5MmRmYTE3YjjtLrcH: 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA3MmNmYTEwNzhmOTYwMmI1MmMzOTdiZTMwZTFjODZlOTkzMzJkYzdiYmYxYWJjYjI2MGM1ZWFiMzIyYTZlZbi5bLQ=: ]] 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA3MmNmYTEwNzhmOTYwMmI1MmMzOTdiZTMwZTFjODZlOTkzMzJkYzdiYmYxYWJjYjI2MGM1ZWFiMzIyYTZlZbi5bLQ=: 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
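The keyid-1 sanity attach made through host/auth.sh@88-@93 succeeded (the bare nvme0n1 is the bdev name returned by bdev_nvme_attach_controller, and bdev_nvme_get_controllers confirms controller nvme0 before it is detached), so the exhaustive sweep begins: the loops entered at host/auth.sh@100-@102 run the same three-step body over every digest, DH group, and key ID. In outline, with the digest and dhgroup lists as printed by the earlier printf traces:

for digest in sha256 sha384 sha512; do
  for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
    for keyid in "${!keys[@]}"; do                       # key IDs 0..4
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # rekey the kernel host entry
      connect_authenticate "$digest" "$dhgroup" "$keyid" # reconnect and verify
    done
  done
done

Everything from here to the end of the section is iterations of that body, 75 in total if the sweep runs to completion.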
00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.952 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.211 nvme0n1 00:25:11.211 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.211 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.211 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.211 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.211 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.211 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.211 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.211 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.211 10:38:45 
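That first loop iteration, stripped of the rpc_cmd/xtrace wrapping, shows the shape of every one that follows: both ends must agree on the hash, the DH group, and the DHHC-1 secrets before the attach can succeed (get_main_ns_ip resolves to NVMF_INITIATOR_IP, 10.0.0.1, because the transport is tcp). The kernel attribute names below are the standard nvmet ones but are inferred, since xtrace hides the redirect targets, and the elided secret values appear in full in the trace:

# Kernel target side (nvmet_auth_set_key sha256 ffdhe2048 0):
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)'   > $host/dhchap_hash
echo ffdhe2048        > $host/dhchap_dhgroup
echo 'DHHC-1:00:...:' > $host/dhchap_key       # keys[0]
echo 'DHHC-1:03:...:' > $host/dhchap_ctrl_key  # ckeys[0] => bidirectional auth

# SPDK initiator side (connect_authenticate sha256 ffdhe2048 0):
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
scripts/rpc.py bdev_nvme_get_controllers     # expect a controller named nvme0
scripts/rpc.py bdev_nvme_detach_controller nvme0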
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.211 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.211 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.211 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.211 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:11.211 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.211 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:11.211 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:11.211 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:11.211 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliODcxYmMzYmU5N2EwY2YwYWNjMDQzNDkyOGU1ZjdjZDk4Y2JiMTExMzczMTY1GRpaDg==: 00:25:11.211 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: 00:25:11.211 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:11.211 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:11.211 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliODcxYmMzYmU5N2EwY2YwYWNjMDQzNDkyOGU1ZjdjZDk4Y2JiMTExMzczMTY1GRpaDg==: 00:25:11.211 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: ]] 00:25:11.211 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: 00:25:11.211 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:11.211 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.211 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:11.211 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:11.211 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:11.211 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.211 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:11.211 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.211 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.211 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.211 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.211 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:11.211 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:11.211 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:11.212 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.212 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.212 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:11.212 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.212 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:11.212 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:11.212 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:11.212 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:11.212 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.212 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.470 nvme0n1 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkYjA2Nzg5MjM4MTU0YjdlMTQwODUyNGRmN2JjMzmapWOK: 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:NTdkYjA2Nzg5MjM4MTU0YjdlMTQwODUyNGRmN2JjMzmapWOK: 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: ]] 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.470 nvme0n1 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.470 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.729 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.729 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.729 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.729 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.729 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.729 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.729 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:11.729 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.729 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:11.729 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:11.729 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:11.729 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTQ3ODg1NThlODM3OGIzOTBjZDU2MzkxYzQwZTA5MjVhZjhmMTAzMWUzMzg1ZjkyzHJTZA==: 00:25:11.729 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWNlNGQxNDcxNWMwMTIzNjg0OWE5ZmU2YTUwZjI2NmSIL/rk: 00:25:11.729 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:11.729 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:11.729 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTQ3ODg1NThlODM3OGIzOTBjZDU2MzkxYzQwZTA5MjVhZjhmMTAzMWUzMzg1ZjkyzHJTZA==: 00:25:11.729 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWNlNGQxNDcxNWMwMTIzNjg0OWE5ZmU2YTUwZjI2NmSIL/rk: ]] 00:25:11.729 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWNlNGQxNDcxNWMwMTIzNjg0OWE5ZmU2YTUwZjI2NmSIL/rk: 00:25:11.729 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:11.729 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.729 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:11.729 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:11.729 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:11.729 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.729 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:11.729 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.729 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.729 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.729 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:25:11.729 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:11.729 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:11.729 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:11.729 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.729 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.729 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:11.729 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.730 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:11.730 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:11.730 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:11.730 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:11.730 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.730 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.730 nvme0n1 00:25:11.730 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.730 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.730 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.730 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.730 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.730 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.730 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.730 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.730 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.730 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.730 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.730 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.730 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:11.730 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.730 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:11.730 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:11.730 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:11.730 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:Njg1YWZiMGE5YTY5YzNjZDIyN2FkYjViNjJkYjE5ODZhNWJlOTU3ZGVjZWNmNDdjOTczMzNiNzAwYTVhYmQ3M8GTeUQ=: 00:25:11.730 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:11.730 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:11.730 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:11.730 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Njg1YWZiMGE5YTY5YzNjZDIyN2FkYjViNjJkYjE5ODZhNWJlOTU3ZGVjZWNmNDdjOTczMzNiNzAwYTVhYmQ3M8GTeUQ=: 00:25:11.730 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:11.730 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.989 nvme0n1 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.989 10:38:45 
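Key ID 4 is the deliberate odd one out: ckeys[4] was set to the empty string back at host/auth.sh@77, so the `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})` expansion traced above produced an empty array and the attach ran with --dhchap-key key4 alone, exercising unidirectional (host-only) authentication instead of the bidirectional form used for key IDs 0-3. A minimal illustration of that bash idiom, with hypothetical values:

ckeys=([1]=ck1 [4]="")                                      # [4] empty, as in the trace
keyid=4
args=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${#args[@]}"   # 0 -> no controller-key argument is emitted at all
keyid=1
args=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${#args[@]}"   # 2 -> --dhchap-ctrlr-key ckey1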
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTNmYmQzZDhjOTJhZDlmNjhhOTMxNWE5MmRmYTE3YjjtLrcH: 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA3MmNmYTEwNzhmOTYwMmI1MmMzOTdiZTMwZTFjODZlOTkzMzJkYzdiYmYxYWJjYjI2MGM1ZWFiMzIyYTZlZbi5bLQ=: 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTNmYmQzZDhjOTJhZDlmNjhhOTMxNWE5MmRmYTE3YjjtLrcH: 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA3MmNmYTEwNzhmOTYwMmI1MmMzOTdiZTMwZTFjODZlOTkzMzJkYzdiYmYxYWJjYjI2MGM1ZWFiMzIyYTZlZbi5bLQ=: ]] 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA3MmNmYTEwNzhmOTYwMmI1MmMzOTdiZTMwZTFjODZlOTkzMzJkYzdiYmYxYWJjYjI2MGM1ZWFiMzIyYTZlZbi5bLQ=: 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:11.989 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.990 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.990 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.990 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.990 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:11.990 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:11.990 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:11.990 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.990 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.990 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:11.990 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.990 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:11.990 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:11.990 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:11.990 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:11.990 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.990 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.249 nvme0n1 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliODcxYmMzYmU5N2EwY2YwYWNjMDQzNDkyOGU1ZjdjZDk4Y2JiMTExMzczMTY1GRpaDg==: 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliODcxYmMzYmU5N2EwY2YwYWNjMDQzNDkyOGU1ZjdjZDk4Y2JiMTExMzczMTY1GRpaDg==: 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: ]] 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:12.249 
10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.249 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.508 nvme0n1 00:25:12.508 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.508 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.508 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.508 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.508 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.508 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.508 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.508 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.508 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.508 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.508 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.508 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.508 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:12.508 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.508 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:12.508 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:12.508 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:12.508 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkYjA2Nzg5MjM4MTU0YjdlMTQwODUyNGRmN2JjMzmapWOK: 00:25:12.508 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: 00:25:12.508 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:12.508 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:12.508 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTdkYjA2Nzg5MjM4MTU0YjdlMTQwODUyNGRmN2JjMzmapWOK: 00:25:12.508 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: ]] 00:25:12.509 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: 00:25:12.509 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:12.509 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.509 10:38:46 
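
The key= and ckey= strings above follow the NVMe DH-HMAC-CHAP secret representation DHHC-1:<id>:<base64>:, where the id names the hash the secret is sized for (00 opaque, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the decoded payload is the secret followed by a 4-byte CRC-32. The keys in this trace are consistent with that: the :01: secret below decodes to 36 bytes. A quick sanity check, as a sketch (the field meanings stated here come from the secret format, not from this log):

  key='DHHC-1:01:NTdkYjA2Nzg5MjM4MTU0YjdlMTQwODUyNGRmN2JjMzmapWOK:'
  IFS=: read -r prefix id b64 _ <<< "$key"
  echo "$prefix $id"                    # DHHC-1 01 -> SHA-256-sized secret
  echo -n "$b64" | base64 -d | wc -c    # 36 = 32-byte secret + 4-byte CRC-32
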
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:12.509 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:12.509 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:12.509 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.509 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:12.509 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.509 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.509 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.509 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.509 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:12.509 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:12.509 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:12.509 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.509 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.509 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:12.509 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.509 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:12.509 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:12.509 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:12.509 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:12.509 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.509 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.768 nvme0n1 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTQ3ODg1NThlODM3OGIzOTBjZDU2MzkxYzQwZTA5MjVhZjhmMTAzMWUzMzg1ZjkyzHJTZA==: 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWNlNGQxNDcxNWMwMTIzNjg0OWE5ZmU2YTUwZjI2NmSIL/rk: 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTQ3ODg1NThlODM3OGIzOTBjZDU2MzkxYzQwZTA5MjVhZjhmMTAzMWUzMzg1ZjkyzHJTZA==: 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWNlNGQxNDcxNWMwMTIzNjg0OWE5ZmU2YTUwZjI2NmSIL/rk: ]] 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWNlNGQxNDcxNWMwMTIzNjg0OWE5ZmU2YTUwZjI2NmSIL/rk: 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.768 10:38:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.768 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.027 nvme0n1 00:25:13.027 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.027 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.027 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.027 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.027 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.027 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.027 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.027 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.027 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.027 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.027 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.027 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.027 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:13.027 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.027 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:13.027 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:13.027 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:13.027 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Njg1YWZiMGE5YTY5YzNjZDIyN2FkYjViNjJkYjE5ODZhNWJlOTU3ZGVjZWNmNDdjOTczMzNiNzAwYTVhYmQ3M8GTeUQ=: 00:25:13.027 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:13.027 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:13.027 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:13.027 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Njg1YWZiMGE5YTY5YzNjZDIyN2FkYjViNjJkYjE5ODZhNWJlOTU3ZGVjZWNmNDdjOTczMzNiNzAwYTVhYmQ3M8GTeUQ=: 00:25:13.027 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:13.027 10:38:46 
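
keyid 4 is the one entry with no controller key, which is why ckey expands to the empty string here ([[ -z '' ]]) and the attach for keyid 4 carries only --dhchap-key key4, skipping bidirectional authentication. The mechanism is the :+ expansion at host/auth.sh@58; a self-contained illustration with placeholder values:

  ckeys=([0]="placeholder-secret" [4]="")   # 4 deliberately has no ctrlr key
  for keyid in 0 4; do
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid extra args: ${ckey[*]:-<none>}"
  done
  # keyid=0 extra args: --dhchap-ctrlr-key ckey0
  # keyid=4 extra args: <none>
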
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:13.027 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.027 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:13.027 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:13.027 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:13.027 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.027 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:13.027 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.027 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.027 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.027 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.027 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:13.027 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:13.027 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:13.027 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.027 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.027 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:13.027 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.027 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:13.027 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:13.027 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:13.027 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:13.027 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.027 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.286 nvme0n1 00:25:13.286 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.286 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.286 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.286 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.286 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.286 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.286 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.286 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:13.286 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.286 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.286 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.286 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:13.286 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.286 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:13.286 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.286 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:13.286 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:13.286 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:13.286 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTNmYmQzZDhjOTJhZDlmNjhhOTMxNWE5MmRmYTE3YjjtLrcH: 00:25:13.286 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA3MmNmYTEwNzhmOTYwMmI1MmMzOTdiZTMwZTFjODZlOTkzMzJkYzdiYmYxYWJjYjI2MGM1ZWFiMzIyYTZlZbi5bLQ=: 00:25:13.286 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:13.287 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:13.287 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTNmYmQzZDhjOTJhZDlmNjhhOTMxNWE5MmRmYTE3YjjtLrcH: 00:25:13.287 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA3MmNmYTEwNzhmOTYwMmI1MmMzOTdiZTMwZTFjODZlOTkzMzJkYzdiYmYxYWJjYjI2MGM1ZWFiMzIyYTZlZbi5bLQ=: ]] 00:25:13.287 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA3MmNmYTEwNzhmOTYwMmI1MmMzOTdiZTMwZTFjODZlOTkzMzJkYzdiYmYxYWJjYjI2MGM1ZWFiMzIyYTZlZbi5bLQ=: 00:25:13.287 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:13.287 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.287 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:13.287 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:13.287 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:13.287 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.287 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:13.287 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.287 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.287 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.287 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.287 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:13.287 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:25:13.287 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:13.287 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.287 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.287 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:13.287 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.287 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:13.287 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:13.287 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:13.287 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:13.287 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.287 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.546 nvme0n1 00:25:13.546 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.546 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.546 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.546 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.546 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.546 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.546 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.546 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.546 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.546 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.546 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.546 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.546 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:13.546 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.546 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:13.546 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:13.546 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:13.546 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliODcxYmMzYmU5N2EwY2YwYWNjMDQzNDkyOGU1ZjdjZDk4Y2JiMTExMzczMTY1GRpaDg==: 00:25:13.546 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: 00:25:13.546 10:38:47 
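
The backslash-laden [[ nvme0 == \n\v\m\e\0 ]] that precedes each detach is not corruption: the right-hand side of [[ == ]] is a glob pattern, so host/auth.sh quotes it and xtrace prints every character escaped to show a literal comparison. The check in standalone form:

  name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == "nvme0" ]]                 # xtrace renders the quoted RHS as \n\v\m\e\0
  rpc_cmd bdev_nvme_detach_controller nvme0
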
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:13.546 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:13.546 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliODcxYmMzYmU5N2EwY2YwYWNjMDQzNDkyOGU1ZjdjZDk4Y2JiMTExMzczMTY1GRpaDg==: 00:25:13.546 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: ]] 00:25:13.546 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: 00:25:13.546 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:13.805 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.805 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:13.805 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:13.805 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:13.805 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.805 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:13.805 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.805 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.805 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.805 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.805 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:13.805 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:13.805 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:13.805 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.805 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.805 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:13.805 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.805 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:13.805 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:13.805 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:13.805 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:13.805 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.805 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.805 nvme0n1 00:25:13.805 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkYjA2Nzg5MjM4MTU0YjdlMTQwODUyNGRmN2JjMzmapWOK: 00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: 00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTdkYjA2Nzg5MjM4MTU0YjdlMTQwODUyNGRmN2JjMzmapWOK: 00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: ]] 00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: 00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.064 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.323 nvme0n1 00:25:14.323 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.323 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.323 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.323 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.323 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.323 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.323 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.323 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.323 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.323 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.323 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.323 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.323 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:14.323 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.323 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:14.323 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:25:14.323 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:14.323 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTQ3ODg1NThlODM3OGIzOTBjZDU2MzkxYzQwZTA5MjVhZjhmMTAzMWUzMzg1ZjkyzHJTZA==: 00:25:14.323 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWNlNGQxNDcxNWMwMTIzNjg0OWE5ZmU2YTUwZjI2NmSIL/rk: 00:25:14.323 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:14.323 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:14.323 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTQ3ODg1NThlODM3OGIzOTBjZDU2MzkxYzQwZTA5MjVhZjhmMTAzMWUzMzg1ZjkyzHJTZA==: 00:25:14.323 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWNlNGQxNDcxNWMwMTIzNjg0OWE5ZmU2YTUwZjI2NmSIL/rk: ]] 00:25:14.323 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWNlNGQxNDcxNWMwMTIzNjg0OWE5ZmU2YTUwZjI2NmSIL/rk: 00:25:14.323 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:14.324 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.324 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:14.324 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:14.324 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:14.324 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.324 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:14.324 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.324 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.324 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.324 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.324 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:14.324 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:14.324 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:14.324 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.324 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.324 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:14.324 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.324 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:14.324 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:14.324 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:14.324 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:14.324 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.324 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.583 nvme0n1 00:25:14.583 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.583 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.583 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.583 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.583 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.583 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.583 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.583 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.583 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.583 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.583 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.583 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.583 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:14.583 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.583 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:14.583 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:14.583 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:14.583 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Njg1YWZiMGE5YTY5YzNjZDIyN2FkYjViNjJkYjE5ODZhNWJlOTU3ZGVjZWNmNDdjOTczMzNiNzAwYTVhYmQ3M8GTeUQ=: 00:25:14.583 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:14.583 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:14.583 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:14.583 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Njg1YWZiMGE5YTY5YzNjZDIyN2FkYjViNjJkYjE5ODZhNWJlOTU3ZGVjZWNmNDdjOTczMzNiNzAwYTVhYmQ3M8GTeUQ=: 00:25:14.583 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:14.583 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:14.583 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.583 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:14.583 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:14.583 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:14.583 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.583 10:38:48 
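
Zooming out, host/auth.sh@101-104 drive the full matrix; this excerpt covers digest sha256 with dhgroups ffdhe2048 through ffdhe6144 and keyids 0 through 4. The loop shape as reconstructed from the for-lines in the trace (group list beyond what is shown here is an assumption):

  for dhgroup in "${dhgroups[@]}"; do        # ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ...
      for keyid in "${!keys[@]}"; do         # 0 1 2 3 4
          nvmet_auth_set_key sha256 "$dhgroup" "$keyid"     # target side: install key
          connect_authenticate sha256 "$dhgroup" "$keyid"   # host side: attach + verify
      done
  done
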
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:14.583 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.583 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.583 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.583 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.583 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:14.583 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:14.583 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:14.583 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.583 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.583 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:14.583 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.583 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:14.583 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:14.583 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:14.583 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:14.583 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.583 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.842 nvme0n1 00:25:14.842 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.842 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.842 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.842 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.842 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.842 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.842 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.842 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.842 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.842 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.842 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.842 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:14.842 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.842 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:25:14.842 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.842 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:14.842 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:14.842 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:14.842 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTNmYmQzZDhjOTJhZDlmNjhhOTMxNWE5MmRmYTE3YjjtLrcH: 00:25:14.843 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA3MmNmYTEwNzhmOTYwMmI1MmMzOTdiZTMwZTFjODZlOTkzMzJkYzdiYmYxYWJjYjI2MGM1ZWFiMzIyYTZlZbi5bLQ=: 00:25:14.843 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:14.843 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:14.843 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTNmYmQzZDhjOTJhZDlmNjhhOTMxNWE5MmRmYTE3YjjtLrcH: 00:25:14.843 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA3MmNmYTEwNzhmOTYwMmI1MmMzOTdiZTMwZTFjODZlOTkzMzJkYzdiYmYxYWJjYjI2MGM1ZWFiMzIyYTZlZbi5bLQ=: ]] 00:25:14.843 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA3MmNmYTEwNzhmOTYwMmI1MmMzOTdiZTMwZTFjODZlOTkzMzJkYzdiYmYxYWJjYjI2MGM1ZWFiMzIyYTZlZbi5bLQ=: 00:25:14.843 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:14.843 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.843 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:14.843 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:14.843 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:14.843 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.843 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:14.843 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.843 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.843 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.843 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.843 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:14.843 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:14.843 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:14.843 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.843 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.843 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:14.843 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.843 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:25:14.843 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:14.843 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:14.843 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:14.843 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.843 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.410 nvme0n1 00:25:15.410 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.410 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.410 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.410 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.410 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.410 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.410 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.410 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.410 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.410 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.410 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.410 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.410 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:15.410 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.410 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:15.410 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:15.410 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:15.411 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliODcxYmMzYmU5N2EwY2YwYWNjMDQzNDkyOGU1ZjdjZDk4Y2JiMTExMzczMTY1GRpaDg==: 00:25:15.411 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: 00:25:15.411 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:15.411 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:15.411 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliODcxYmMzYmU5N2EwY2YwYWNjMDQzNDkyOGU1ZjdjZDk4Y2JiMTExMzczMTY1GRpaDg==: 00:25:15.411 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: ]] 00:25:15.411 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: 
00:25:15.411 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:15.411 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.411 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:15.411 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:15.411 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:15.411 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.411 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:15.411 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.411 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.411 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.411 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.411 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:15.411 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:15.411 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:15.411 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.411 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.411 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:15.411 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.411 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:15.411 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:15.411 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:15.411 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:15.411 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.411 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.669 nvme0n1 00:25:15.669 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.928 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.928 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.928 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.929 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.929 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.929 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.929 10:38:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.929 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.929 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.929 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.929 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.929 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:15.929 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.929 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:15.929 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:15.929 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:15.929 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkYjA2Nzg5MjM4MTU0YjdlMTQwODUyNGRmN2JjMzmapWOK: 00:25:15.929 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: 00:25:15.929 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:15.929 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:15.929 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTdkYjA2Nzg5MjM4MTU0YjdlMTQwODUyNGRmN2JjMzmapWOK: 00:25:15.929 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: ]] 00:25:15.929 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: 00:25:15.929 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:15.929 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.929 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:15.929 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:15.929 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:15.929 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.929 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:15.929 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.929 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.929 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.929 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.929 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:15.929 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:15.929 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:15.929 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.929 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.929 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:15.929 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.929 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:15.929 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:15.929 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:15.929 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:15.929 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.929 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.188 nvme0n1 00:25:16.188 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.188 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.188 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.188 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.188 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.188 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.188 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.188 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.188 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.188 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.188 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.188 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.188 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:16.188 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.188 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:16.188 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:16.188 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:16.188 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTQ3ODg1NThlODM3OGIzOTBjZDU2MzkxYzQwZTA5MjVhZjhmMTAzMWUzMzg1ZjkyzHJTZA==: 00:25:16.188 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWNlNGQxNDcxNWMwMTIzNjg0OWE5ZmU2YTUwZjI2NmSIL/rk: 00:25:16.188 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:16.188 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:16.188 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:NTQ3ODg1NThlODM3OGIzOTBjZDU2MzkxYzQwZTA5MjVhZjhmMTAzMWUzMzg1ZjkyzHJTZA==: 00:25:16.188 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWNlNGQxNDcxNWMwMTIzNjg0OWE5ZmU2YTUwZjI2NmSIL/rk: ]] 00:25:16.188 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWNlNGQxNDcxNWMwMTIzNjg0OWE5ZmU2YTUwZjI2NmSIL/rk: 00:25:16.188 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:16.188 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.188 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:16.188 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:16.188 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:16.188 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.188 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:16.188 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.188 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.447 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.447 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.447 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:16.447 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:16.447 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:16.447 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.447 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.447 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:16.447 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.447 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:16.447 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:16.447 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:16.447 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:16.447 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.447 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.707 nvme0n1 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Njg1YWZiMGE5YTY5YzNjZDIyN2FkYjViNjJkYjE5ODZhNWJlOTU3ZGVjZWNmNDdjOTczMzNiNzAwYTVhYmQ3M8GTeUQ=: 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Njg1YWZiMGE5YTY5YzNjZDIyN2FkYjViNjJkYjE5ODZhNWJlOTU3ZGVjZWNmNDdjOTczMzNiNzAwYTVhYmQ3M8GTeUQ=: 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.707 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.275 nvme0n1 00:25:17.275 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.275 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.275 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.275 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.275 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.275 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.275 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.275 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.275 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.275 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.275 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.275 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:17.275 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.275 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:17.275 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.275 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:17.275 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:17.275 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:17.275 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTNmYmQzZDhjOTJhZDlmNjhhOTMxNWE5MmRmYTE3YjjtLrcH: 00:25:17.275 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZjA3MmNmYTEwNzhmOTYwMmI1MmMzOTdiZTMwZTFjODZlOTkzMzJkYzdiYmYxYWJjYjI2MGM1ZWFiMzIyYTZlZbi5bLQ=: 00:25:17.275 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:17.275 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:17.275 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTNmYmQzZDhjOTJhZDlmNjhhOTMxNWE5MmRmYTE3YjjtLrcH: 00:25:17.275 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA3MmNmYTEwNzhmOTYwMmI1MmMzOTdiZTMwZTFjODZlOTkzMzJkYzdiYmYxYWJjYjI2MGM1ZWFiMzIyYTZlZbi5bLQ=: ]] 00:25:17.275 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA3MmNmYTEwNzhmOTYwMmI1MmMzOTdiZTMwZTFjODZlOTkzMzJkYzdiYmYxYWJjYjI2MGM1ZWFiMzIyYTZlZbi5bLQ=: 00:25:17.275 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:17.275 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.275 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:17.275 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:17.275 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:17.275 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.275 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:17.275 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.275 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.275 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.275 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.275 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:17.275 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:17.275 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:17.275 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.275 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.275 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:17.275 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.275 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:17.276 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:17.276 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:17.276 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:17.276 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.276 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:17.843 nvme0n1 00:25:17.843 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.843 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.843 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.843 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.843 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.843 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.843 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.843 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.843 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.843 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.843 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.843 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.843 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:17.843 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.843 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:17.843 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:17.843 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:17.843 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliODcxYmMzYmU5N2EwY2YwYWNjMDQzNDkyOGU1ZjdjZDk4Y2JiMTExMzczMTY1GRpaDg==: 00:25:17.843 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: 00:25:17.843 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:17.843 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:17.843 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliODcxYmMzYmU5N2EwY2YwYWNjMDQzNDkyOGU1ZjdjZDk4Y2JiMTExMzczMTY1GRpaDg==: 00:25:17.844 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: ]] 00:25:17.844 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: 00:25:17.844 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:17.844 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.844 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:17.844 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:17.844 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:17.844 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:17.844 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:17.844 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.844 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.844 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.844 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.844 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:17.844 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:17.844 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:17.844 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.844 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.844 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:17.844 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.844 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:17.844 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:17.844 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:17.844 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:17.844 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.844 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.412 nvme0n1 00:25:18.412 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.412 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.412 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.412 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.413 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.413 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.413 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.413 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.413 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.413 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.413 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.413 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.413 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:18.413 
10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.413 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:18.413 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:18.413 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:18.413 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkYjA2Nzg5MjM4MTU0YjdlMTQwODUyNGRmN2JjMzmapWOK: 00:25:18.413 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: 00:25:18.413 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:18.413 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:18.413 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTdkYjA2Nzg5MjM4MTU0YjdlMTQwODUyNGRmN2JjMzmapWOK: 00:25:18.413 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: ]] 00:25:18.413 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: 00:25:18.413 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:18.413 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.413 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:18.413 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:18.413 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:18.413 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.413 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:18.413 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.413 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.413 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.413 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.413 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:18.413 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:18.413 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:18.413 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.413 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.413 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:18.413 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.413 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:18.413 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:18.413 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:18.413 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:18.413 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.413 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.980 nvme0n1 00:25:18.980 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.980 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.980 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.980 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.980 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.980 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.239 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.239 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.239 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.239 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.239 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.239 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.239 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:19.239 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.239 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:19.239 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:19.239 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:19.239 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTQ3ODg1NThlODM3OGIzOTBjZDU2MzkxYzQwZTA5MjVhZjhmMTAzMWUzMzg1ZjkyzHJTZA==: 00:25:19.239 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWNlNGQxNDcxNWMwMTIzNjg0OWE5ZmU2YTUwZjI2NmSIL/rk: 00:25:19.239 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:19.239 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:19.239 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTQ3ODg1NThlODM3OGIzOTBjZDU2MzkxYzQwZTA5MjVhZjhmMTAzMWUzMzg1ZjkyzHJTZA==: 00:25:19.239 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWNlNGQxNDcxNWMwMTIzNjg0OWE5ZmU2YTUwZjI2NmSIL/rk: ]] 00:25:19.239 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWNlNGQxNDcxNWMwMTIzNjg0OWE5ZmU2YTUwZjI2NmSIL/rk: 00:25:19.239 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:19.239 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.239 
10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:19.239 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:19.239 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:19.239 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.239 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:19.239 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.239 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.239 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.239 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.239 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:19.239 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:19.239 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:19.239 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.239 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.239 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:19.239 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.239 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:19.239 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:19.239 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:19.239 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:19.239 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.239 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.806 nvme0n1 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Njg1YWZiMGE5YTY5YzNjZDIyN2FkYjViNjJkYjE5ODZhNWJlOTU3ZGVjZWNmNDdjOTczMzNiNzAwYTVhYmQ3M8GTeUQ=: 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Njg1YWZiMGE5YTY5YzNjZDIyN2FkYjViNjJkYjE5ODZhNWJlOTU3ZGVjZWNmNDdjOTczMzNiNzAwYTVhYmQ3M8GTeUQ=: 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.806 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.373 nvme0n1 00:25:20.373 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTNmYmQzZDhjOTJhZDlmNjhhOTMxNWE5MmRmYTE3YjjtLrcH: 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA3MmNmYTEwNzhmOTYwMmI1MmMzOTdiZTMwZTFjODZlOTkzMzJkYzdiYmYxYWJjYjI2MGM1ZWFiMzIyYTZlZbi5bLQ=: 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTNmYmQzZDhjOTJhZDlmNjhhOTMxNWE5MmRmYTE3YjjtLrcH: 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZjA3MmNmYTEwNzhmOTYwMmI1MmMzOTdiZTMwZTFjODZlOTkzMzJkYzdiYmYxYWJjYjI2MGM1ZWFiMzIyYTZlZbi5bLQ=: ]] 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA3MmNmYTEwNzhmOTYwMmI1MmMzOTdiZTMwZTFjODZlOTkzMzJkYzdiYmYxYWJjYjI2MGM1ZWFiMzIyYTZlZbi5bLQ=: 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.374 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.633 nvme0n1 00:25:20.633 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.633 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.633 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.633 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.633 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:20.633 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.633 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.633 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.633 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.633 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.633 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.633 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.633 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:20.633 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.633 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:20.633 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:20.633 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:20.633 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliODcxYmMzYmU5N2EwY2YwYWNjMDQzNDkyOGU1ZjdjZDk4Y2JiMTExMzczMTY1GRpaDg==: 00:25:20.633 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: 00:25:20.633 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:20.633 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:20.634 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliODcxYmMzYmU5N2EwY2YwYWNjMDQzNDkyOGU1ZjdjZDk4Y2JiMTExMzczMTY1GRpaDg==: 00:25:20.634 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: ]] 00:25:20.634 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: 00:25:20.634 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:20.634 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.634 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:20.634 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:20.634 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:20.634 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.634 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:20.634 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.634 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.634 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.634 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:25:20.634 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:20.634 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:20.634 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:20.634 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.634 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.634 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:20.634 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.634 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:20.634 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:20.634 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:20.634 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:20.634 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.634 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.893 nvme0n1 00:25:20.893 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.893 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.893 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.893 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.893 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.893 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.894 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.894 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.894 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.894 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.894 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.894 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.894 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:20.894 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.894 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:20.894 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:20.894 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:20.894 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkYjA2Nzg5MjM4MTU0YjdlMTQwODUyNGRmN2JjMzmapWOK: 00:25:20.894 10:38:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: 00:25:20.894 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:20.894 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:20.894 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTdkYjA2Nzg5MjM4MTU0YjdlMTQwODUyNGRmN2JjMzmapWOK: 00:25:20.894 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: ]] 00:25:20.894 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: 00:25:20.894 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:20.894 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.894 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:20.894 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:20.894 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:20.894 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.894 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:20.894 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.894 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.894 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.894 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.894 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:20.894 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:20.894 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:20.894 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.894 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.894 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:20.894 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.894 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:20.894 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:20.894 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:20.894 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:20.894 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.894 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.153 nvme0n1 00:25:21.153 10:38:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.153 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.153 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.153 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.153 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.153 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.153 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.153 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.153 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.153 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.153 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.153 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.153 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:21.153 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.153 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:21.153 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:21.153 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:21.153 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTQ3ODg1NThlODM3OGIzOTBjZDU2MzkxYzQwZTA5MjVhZjhmMTAzMWUzMzg1ZjkyzHJTZA==: 00:25:21.153 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWNlNGQxNDcxNWMwMTIzNjg0OWE5ZmU2YTUwZjI2NmSIL/rk: 00:25:21.153 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:21.153 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:21.153 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTQ3ODg1NThlODM3OGIzOTBjZDU2MzkxYzQwZTA5MjVhZjhmMTAzMWUzMzg1ZjkyzHJTZA==: 00:25:21.153 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWNlNGQxNDcxNWMwMTIzNjg0OWE5ZmU2YTUwZjI2NmSIL/rk: ]] 00:25:21.153 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWNlNGQxNDcxNWMwMTIzNjg0OWE5ZmU2YTUwZjI2NmSIL/rk: 00:25:21.153 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:21.153 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.153 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:21.153 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:21.153 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:21.153 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.153 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:25:21.153 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.153 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.153 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.153 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.153 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:21.153 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:21.153 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:21.153 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.153 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.153 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:21.153 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.153 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:21.153 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:21.153 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:21.153 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:21.153 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.153 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.153 nvme0n1 00:25:21.153 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.153 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.153 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.153 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.153 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.153 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Njg1YWZiMGE5YTY5YzNjZDIyN2FkYjViNjJkYjE5ODZhNWJlOTU3ZGVjZWNmNDdjOTczMzNiNzAwYTVhYmQ3M8GTeUQ=: 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Njg1YWZiMGE5YTY5YzNjZDIyN2FkYjViNjJkYjE5ODZhNWJlOTU3ZGVjZWNmNDdjOTczMzNiNzAwYTVhYmQ3M8GTeUQ=: 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.412 nvme0n1 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.412 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.670 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.670 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:21.670 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.670 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:25:21.671 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.671 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:21.671 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:21.671 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:21.671 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTNmYmQzZDhjOTJhZDlmNjhhOTMxNWE5MmRmYTE3YjjtLrcH: 00:25:21.671 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA3MmNmYTEwNzhmOTYwMmI1MmMzOTdiZTMwZTFjODZlOTkzMzJkYzdiYmYxYWJjYjI2MGM1ZWFiMzIyYTZlZbi5bLQ=: 00:25:21.671 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:21.671 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:21.671 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTNmYmQzZDhjOTJhZDlmNjhhOTMxNWE5MmRmYTE3YjjtLrcH: 00:25:21.671 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA3MmNmYTEwNzhmOTYwMmI1MmMzOTdiZTMwZTFjODZlOTkzMzJkYzdiYmYxYWJjYjI2MGM1ZWFiMzIyYTZlZbi5bLQ=: ]] 00:25:21.671 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA3MmNmYTEwNzhmOTYwMmI1MmMzOTdiZTMwZTFjODZlOTkzMzJkYzdiYmYxYWJjYjI2MGM1ZWFiMzIyYTZlZbi5bLQ=: 00:25:21.671 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:21.671 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.671 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:21.671 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:25:21.671 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:21.671 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.671 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:21.671 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.671 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.671 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.671 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.671 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:21.671 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:21.671 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:21.671 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.671 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.671 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:21.671 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.671 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:21.671 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:21.671 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:21.671 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:21.671 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.671 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.671 nvme0n1 00:25:21.671 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.671 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.671 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.671 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.671 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.671 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.671 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.671 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.671 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.671 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.930 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.930 
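Every DHHC-1:<hh>:<base64>: string echoed in these traces follows the NVMe DH-HMAC-CHAP secret representation: <hh> is a transformation indicator (00 for a plain secret, 01/02/03 for an HMAC-SHA-256/384/512-transformed one), and the base64 payload is, by the nvme-cli convention, the secret with a 4-byte CRC-32 appended. Treat that payload layout as an assumption of this sketch rather than something the log asserts; with it, the secret length of the keyid-0 key above decodes as:

#!/usr/bin/env bash
# Decode one DHHC-1 key from the trace. Assumed layout (nvme-cli style):
# DHHC-1:<hh>:<base64(secret || crc32(secret))>: with <hh> as noted above.
key='DHHC-1:00:OTNmYmQzZDhjOTJhZDlmNjhhOTMxNWE5MmRmYTE3YjjtLrcH:'

IFS=: read -r tag hh b64 _ <<< "$key"
bytes=$(printf '%s' "$b64" | base64 -d | wc -c) # secret plus 4-byte CRC-32
echo "format=$tag transform=$hh secret=$((bytes - 4)) bytes" # 32-byte secret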
10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.930 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:21.930 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.930 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:21.930 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:21.930 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:21.930 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliODcxYmMzYmU5N2EwY2YwYWNjMDQzNDkyOGU1ZjdjZDk4Y2JiMTExMzczMTY1GRpaDg==: 00:25:21.930 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: 00:25:21.930 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:21.930 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:21.930 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliODcxYmMzYmU5N2EwY2YwYWNjMDQzNDkyOGU1ZjdjZDk4Y2JiMTExMzczMTY1GRpaDg==: 00:25:21.930 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: ]] 00:25:21.930 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: 00:25:21.930 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:25:21.930 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.930 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:21.930 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:21.930 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:21.930 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.930 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:21.930 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.930 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.930 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.930 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.930 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:21.930 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:21.930 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:21.930 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.930 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.930 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:21.930 10:38:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.930 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:21.930 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:21.930 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:21.930 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:21.930 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.930 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.930 nvme0n1 00:25:21.930 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.930 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.930 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.930 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.930 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.930 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.930 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.930 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.930 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.930 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.189 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.189 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.189 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:22.189 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.189 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:22.189 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:22.189 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:22.189 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkYjA2Nzg5MjM4MTU0YjdlMTQwODUyNGRmN2JjMzmapWOK: 00:25:22.189 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: 00:25:22.189 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:22.189 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:22.189 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTdkYjA2Nzg5MjM4MTU0YjdlMTQwODUyNGRmN2JjMzmapWOK: 00:25:22.189 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: ]] 00:25:22.189 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: 00:25:22.189 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:22.189 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.189 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:22.189 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:22.189 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:22.189 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.189 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:22.189 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.189 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.189 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.189 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.189 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:22.189 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:22.189 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:22.189 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.189 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.189 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:22.189 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.189 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:22.189 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:22.189 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:22.189 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:22.189 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.189 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.189 nvme0n1 00:25:22.189 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.189 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.189 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.189 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.189 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.189 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.189 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:25:22.189 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.189 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.189 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTQ3ODg1NThlODM3OGIzOTBjZDU2MzkxYzQwZTA5MjVhZjhmMTAzMWUzMzg1ZjkyzHJTZA==: 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWNlNGQxNDcxNWMwMTIzNjg0OWE5ZmU2YTUwZjI2NmSIL/rk: 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTQ3ODg1NThlODM3OGIzOTBjZDU2MzkxYzQwZTA5MjVhZjhmMTAzMWUzMzg1ZjkyzHJTZA==: 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWNlNGQxNDcxNWMwMTIzNjg0OWE5ZmU2YTUwZjI2NmSIL/rk: ]] 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWNlNGQxNDcxNWMwMTIzNjg0OWE5ZmU2YTUwZjI2NmSIL/rk: 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.448 nvme0n1 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.448 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.449 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:22.449 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.449 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:22.449 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:22.449 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:22.449 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Njg1YWZiMGE5YTY5YzNjZDIyN2FkYjViNjJkYjE5ODZhNWJlOTU3ZGVjZWNmNDdjOTczMzNiNzAwYTVhYmQ3M8GTeUQ=: 00:25:22.449 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:22.449 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:22.449 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:22.449 
10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Njg1YWZiMGE5YTY5YzNjZDIyN2FkYjViNjJkYjE5ODZhNWJlOTU3ZGVjZWNmNDdjOTczMzNiNzAwYTVhYmQ3M8GTeUQ=: 00:25:22.449 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:22.449 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.708 nvme0n1 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.708 
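The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) assignment traced at host/auth.sh@58 uses bash's :+ alternate-value expansion to build an optional argument pair: when ckeys[keyid] is empty (as for keyid 4 in this run, where ckey=''), the array expands to nothing and the flag pair simply drops out of the eventual attach command. A minimal reproduction of the idiom with made-up example secrets:

#!/usr/bin/env bash
# Reproduce the optional-flag idiom from host/auth.sh@58. Example data only:
# the empty entry stands in for a keyid with no controller (bidirectional) key.
ckeys=("secret0" "secret1" "")

for keyid in "${!ckeys[@]}"; do
	ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
	echo "keyid=$keyid -> --dhchap-key key${keyid} ${ckey[*]}"
done
# keyid 0 and 1 gain the --dhchap-ctrlr-key pair; keyid 2 gets nothing extra.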
10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTNmYmQzZDhjOTJhZDlmNjhhOTMxNWE5MmRmYTE3YjjtLrcH: 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA3MmNmYTEwNzhmOTYwMmI1MmMzOTdiZTMwZTFjODZlOTkzMzJkYzdiYmYxYWJjYjI2MGM1ZWFiMzIyYTZlZbi5bLQ=: 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTNmYmQzZDhjOTJhZDlmNjhhOTMxNWE5MmRmYTE3YjjtLrcH: 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA3MmNmYTEwNzhmOTYwMmI1MmMzOTdiZTMwZTFjODZlOTkzMzJkYzdiYmYxYWJjYjI2MGM1ZWFiMzIyYTZlZbi5bLQ=: ]] 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA3MmNmYTEwNzhmOTYwMmI1MmMzOTdiZTMwZTFjODZlOTkzMzJkYzdiYmYxYWJjYjI2MGM1ZWFiMzIyYTZlZbi5bLQ=: 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.708 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:22.967 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.967 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.967 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:25:22.967 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.967 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:22.967 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:22.967 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:22.967 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.967 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.967 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:22.967 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.967 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:22.967 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:22.967 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:22.967 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:22.967 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.967 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.967 nvme0n1 00:25:22.967 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.967 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.226 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.226 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.226 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.226 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.226 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.226 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.226 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.226 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.226 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.226 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.226 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:23.226 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.226 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:23.226 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:23.226 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:23.226 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YzliODcxYmMzYmU5N2EwY2YwYWNjMDQzNDkyOGU1ZjdjZDk4Y2JiMTExMzczMTY1GRpaDg==: 00:25:23.226 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: 00:25:23.226 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:23.226 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:23.226 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliODcxYmMzYmU5N2EwY2YwYWNjMDQzNDkyOGU1ZjdjZDk4Y2JiMTExMzczMTY1GRpaDg==: 00:25:23.226 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: ]] 00:25:23.226 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: 00:25:23.226 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:23.226 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.226 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:23.226 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:23.226 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:23.226 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.226 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:23.226 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.226 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.226 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.226 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.226 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.226 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.226 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.226 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.226 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.226 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.226 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.226 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.226 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.226 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.226 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:23.226 10:38:57 
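Each connect_authenticate round in this log reduces to the same two initiator-side RPCs: bdev_nvme_set_options to pin the allowed digest and DH group, then bdev_nvme_attach_controller carrying the per-keyid DH-HMAC-CHAP key names, followed by a get/detach check. Condensed from the sha384/ffdhe4096, keyid-1 round above (assumes a running SPDK app and that key1/ckey1 are already-registered key names, as in this run):

#!/usr/bin/env bash
# One authenticated-connect round, condensed from the trace above.
rpc=scripts/rpc.py

$rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096

$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
	-a 10.0.0.1 -s 4420 \
	-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
	--dhchap-key key1 --dhchap-ctrlr-key ckey1

# Mirror the host/auth.sh@64-65 verification and teardown:
$rpc bdev_nvme_get_controllers | jq -r '.[].name' # expect: nvme0
$rpc bdev_nvme_detach_controller nvme0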
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.226 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.485 nvme0n1 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkYjA2Nzg5MjM4MTU0YjdlMTQwODUyNGRmN2JjMzmapWOK: 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTdkYjA2Nzg5MjM4MTU0YjdlMTQwODUyNGRmN2JjMzmapWOK: 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: ]] 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.485 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.744 nvme0n1 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTQ3ODg1NThlODM3OGIzOTBjZDU2MzkxYzQwZTA5MjVhZjhmMTAzMWUzMzg1ZjkyzHJTZA==: 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWNlNGQxNDcxNWMwMTIzNjg0OWE5ZmU2YTUwZjI2NmSIL/rk: 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTQ3ODg1NThlODM3OGIzOTBjZDU2MzkxYzQwZTA5MjVhZjhmMTAzMWUzMzg1ZjkyzHJTZA==: 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWNlNGQxNDcxNWMwMTIzNjg0OWE5ZmU2YTUwZjI2NmSIL/rk: ]] 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWNlNGQxNDcxNWMwMTIzNjg0OWE5ZmU2YTUwZjI2NmSIL/rk: 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.744 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.003 nvme0n1 00:25:24.003 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.003 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.003 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.003 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.003 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.003 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.003 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.003 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.003 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.003 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.003 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.003 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.003 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:24.003 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.003 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:24.003 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:24.003 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:24.003 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Njg1YWZiMGE5YTY5YzNjZDIyN2FkYjViNjJkYjE5ODZhNWJlOTU3ZGVjZWNmNDdjOTczMzNiNzAwYTVhYmQ3M8GTeUQ=: 00:25:24.003 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:24.003 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:24.003 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:24.003 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Njg1YWZiMGE5YTY5YzNjZDIyN2FkYjViNjJkYjE5ODZhNWJlOTU3ZGVjZWNmNDdjOTczMzNiNzAwYTVhYmQ3M8GTeUQ=: 00:25:24.003 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:24.003 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:24.003 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.003 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:24.003 10:38:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:24.003 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:24.003 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.003 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:24.003 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.003 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.262 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.262 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.262 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.262 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.262 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.262 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.262 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.262 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.262 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.262 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.262 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.262 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.262 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:24.262 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.262 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.262 nvme0n1 00:25:24.262 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.262 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.262 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.262 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.262 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.521 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.521 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.521 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.521 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.521 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.521 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.521 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:24.521 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.521 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:24.521 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.521 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:24.521 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:24.521 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:24.521 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTNmYmQzZDhjOTJhZDlmNjhhOTMxNWE5MmRmYTE3YjjtLrcH: 00:25:24.521 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA3MmNmYTEwNzhmOTYwMmI1MmMzOTdiZTMwZTFjODZlOTkzMzJkYzdiYmYxYWJjYjI2MGM1ZWFiMzIyYTZlZbi5bLQ=: 00:25:24.521 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:24.521 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:24.521 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTNmYmQzZDhjOTJhZDlmNjhhOTMxNWE5MmRmYTE3YjjtLrcH: 00:25:24.521 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA3MmNmYTEwNzhmOTYwMmI1MmMzOTdiZTMwZTFjODZlOTkzMzJkYzdiYmYxYWJjYjI2MGM1ZWFiMzIyYTZlZbi5bLQ=: ]] 00:25:24.521 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA3MmNmYTEwNzhmOTYwMmI1MmMzOTdiZTMwZTFjODZlOTkzMzJkYzdiYmYxYWJjYjI2MGM1ZWFiMzIyYTZlZbi5bLQ=: 00:25:24.521 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:24.521 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.521 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:24.521 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:24.521 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:24.521 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.521 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:24.521 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.521 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.521 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.521 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.521 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.521 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.521 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.521 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.521 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.521 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.521 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.521 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.521 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.521 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.521 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:24.521 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.521 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.780 nvme0n1 00:25:24.780 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.780 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.780 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.780 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.780 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.780 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.780 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.780 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.780 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.780 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.780 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.780 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.780 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:24.780 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.780 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:24.780 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:24.780 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:24.780 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliODcxYmMzYmU5N2EwY2YwYWNjMDQzNDkyOGU1ZjdjZDk4Y2JiMTExMzczMTY1GRpaDg==: 00:25:24.780 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: 00:25:24.780 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:24.780 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:24.780 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzliODcxYmMzYmU5N2EwY2YwYWNjMDQzNDkyOGU1ZjdjZDk4Y2JiMTExMzczMTY1GRpaDg==: 00:25:24.780 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: ]] 00:25:24.780 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: 00:25:24.780 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:24.780 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.780 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:24.780 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:24.780 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:24.780 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.781 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:24.781 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.781 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.781 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.781 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.781 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.781 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.781 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.781 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.781 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.781 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.781 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.781 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.781 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.781 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.781 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:24.781 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.781 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.349 nvme0n1 00:25:25.349 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.349 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.349 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.349 10:38:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.349 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.349 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.349 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.349 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.349 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.349 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.349 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.349 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.349 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:25.349 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.349 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:25.349 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:25.349 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:25.349 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkYjA2Nzg5MjM4MTU0YjdlMTQwODUyNGRmN2JjMzmapWOK: 00:25:25.349 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: 00:25:25.349 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:25.349 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:25.349 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTdkYjA2Nzg5MjM4MTU0YjdlMTQwODUyNGRmN2JjMzmapWOK: 00:25:25.349 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: ]] 00:25:25.349 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: 00:25:25.349 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:25.349 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.349 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:25.349 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:25.349 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:25.349 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.349 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:25.349 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.349 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.349 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.349 10:38:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.349 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.349 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.349 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.349 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.349 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.349 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.349 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.349 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.349 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.349 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.349 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:25.349 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.349 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.608 nvme0n1 00:25:25.608 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.608 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.608 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.608 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.608 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.608 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.866 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.866 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.866 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.866 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.866 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.866 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.866 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:25.866 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.866 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:25.866 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:25.866 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:25.866 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NTQ3ODg1NThlODM3OGIzOTBjZDU2MzkxYzQwZTA5MjVhZjhmMTAzMWUzMzg1ZjkyzHJTZA==: 00:25:25.866 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWNlNGQxNDcxNWMwMTIzNjg0OWE5ZmU2YTUwZjI2NmSIL/rk: 00:25:25.866 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:25.866 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:25.866 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTQ3ODg1NThlODM3OGIzOTBjZDU2MzkxYzQwZTA5MjVhZjhmMTAzMWUzMzg1ZjkyzHJTZA==: 00:25:25.866 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWNlNGQxNDcxNWMwMTIzNjg0OWE5ZmU2YTUwZjI2NmSIL/rk: ]] 00:25:25.866 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWNlNGQxNDcxNWMwMTIzNjg0OWE5ZmU2YTUwZjI2NmSIL/rk: 00:25:25.866 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:25.866 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.866 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:25.866 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:25.866 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:25.866 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.866 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:25.866 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.866 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.867 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.867 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.867 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.867 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.867 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.867 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.867 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.867 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.867 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.867 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.867 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.867 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.867 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:25.867 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.867 
10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.125 nvme0n1 00:25:26.125 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.125 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.125 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.125 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.125 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.125 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.125 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.125 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.125 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.125 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.125 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.125 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.125 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:26.125 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.125 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:26.125 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:26.125 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:26.125 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Njg1YWZiMGE5YTY5YzNjZDIyN2FkYjViNjJkYjE5ODZhNWJlOTU3ZGVjZWNmNDdjOTczMzNiNzAwYTVhYmQ3M8GTeUQ=: 00:25:26.125 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:26.125 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:26.125 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:26.125 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Njg1YWZiMGE5YTY5YzNjZDIyN2FkYjViNjJkYjE5ODZhNWJlOTU3ZGVjZWNmNDdjOTczMzNiNzAwYTVhYmQ3M8GTeUQ=: 00:25:26.125 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:26.125 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:26.125 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.125 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:26.125 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:26.125 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:26.125 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.125 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:26.125 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.125 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.383 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.383 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.383 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:26.383 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:26.383 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:26.383 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.383 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.383 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:26.383 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.383 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:26.383 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:26.383 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:26.383 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:26.383 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.383 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.642 nvme0n1 00:25:26.642 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.642 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.642 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.642 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.642 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.642 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.642 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.642 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.642 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.642 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.642 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.642 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:26.642 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.642 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:26.642 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.642 10:39:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:26.642 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:26.642 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:26.642 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTNmYmQzZDhjOTJhZDlmNjhhOTMxNWE5MmRmYTE3YjjtLrcH: 00:25:26.642 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA3MmNmYTEwNzhmOTYwMmI1MmMzOTdiZTMwZTFjODZlOTkzMzJkYzdiYmYxYWJjYjI2MGM1ZWFiMzIyYTZlZbi5bLQ=: 00:25:26.642 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:26.642 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:26.642 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTNmYmQzZDhjOTJhZDlmNjhhOTMxNWE5MmRmYTE3YjjtLrcH: 00:25:26.642 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA3MmNmYTEwNzhmOTYwMmI1MmMzOTdiZTMwZTFjODZlOTkzMzJkYzdiYmYxYWJjYjI2MGM1ZWFiMzIyYTZlZbi5bLQ=: ]] 00:25:26.642 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA3MmNmYTEwNzhmOTYwMmI1MmMzOTdiZTMwZTFjODZlOTkzMzJkYzdiYmYxYWJjYjI2MGM1ZWFiMzIyYTZlZbi5bLQ=: 00:25:26.642 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:26.642 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.642 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:26.642 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:26.642 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:26.642 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.642 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:26.642 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.643 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.643 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.643 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.643 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:26.643 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:26.643 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:26.643 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.643 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.643 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:26.643 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.643 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:26.643 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:26.643 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:26.643 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:26.643 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.643 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.210 nvme0n1 00:25:27.210 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.210 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.210 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.210 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.210 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.210 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.469 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.469 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.469 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.469 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.469 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.469 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.469 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:27.469 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.469 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:27.469 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:27.469 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:27.469 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliODcxYmMzYmU5N2EwY2YwYWNjMDQzNDkyOGU1ZjdjZDk4Y2JiMTExMzczMTY1GRpaDg==: 00:25:27.469 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: 00:25:27.469 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:27.469 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:27.469 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliODcxYmMzYmU5N2EwY2YwYWNjMDQzNDkyOGU1ZjdjZDk4Y2JiMTExMzczMTY1GRpaDg==: 00:25:27.469 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: ]] 00:25:27.469 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: 00:25:27.469 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:27.469 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.469 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:27.469 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:27.469 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:27.469 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.469 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:27.469 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.469 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.469 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.469 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.469 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:27.469 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:27.469 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:27.469 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.469 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.469 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:27.469 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.469 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:27.469 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:27.469 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:27.469 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:27.469 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.469 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.036 nvme0n1 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkYjA2Nzg5MjM4MTU0YjdlMTQwODUyNGRmN2JjMzmapWOK: 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTdkYjA2Nzg5MjM4MTU0YjdlMTQwODUyNGRmN2JjMzmapWOK: 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: ]] 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.036 
10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.036 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.604 nvme0n1 00:25:28.604 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.604 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.604 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.604 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.604 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.604 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.604 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.604 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.604 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.604 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.604 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.604 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.604 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:28.604 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.604 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:28.604 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:28.604 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:28.604 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTQ3ODg1NThlODM3OGIzOTBjZDU2MzkxYzQwZTA5MjVhZjhmMTAzMWUzMzg1ZjkyzHJTZA==: 00:25:28.604 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWNlNGQxNDcxNWMwMTIzNjg0OWE5ZmU2YTUwZjI2NmSIL/rk: 00:25:28.604 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:28.604 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:28.604 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTQ3ODg1NThlODM3OGIzOTBjZDU2MzkxYzQwZTA5MjVhZjhmMTAzMWUzMzg1ZjkyzHJTZA==: 00:25:28.604 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZWNlNGQxNDcxNWMwMTIzNjg0OWE5ZmU2YTUwZjI2NmSIL/rk: ]] 00:25:28.604 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWNlNGQxNDcxNWMwMTIzNjg0OWE5ZmU2YTUwZjI2NmSIL/rk: 00:25:28.604 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:28.604 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.604 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:28.604 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:28.604 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:28.604 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.604 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:28.604 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.604 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.605 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.605 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.605 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:28.605 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:28.605 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:28.605 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.605 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.605 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:28.605 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.605 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:28.605 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:28.605 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:28.605 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:28.605 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.605 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.172 nvme0n1 00:25:29.172 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.172 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.172 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.172 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.172 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.172 10:39:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.430 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.430 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.430 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.430 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.430 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.430 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.430 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:29.430 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.430 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:29.430 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:29.430 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:29.430 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Njg1YWZiMGE5YTY5YzNjZDIyN2FkYjViNjJkYjE5ODZhNWJlOTU3ZGVjZWNmNDdjOTczMzNiNzAwYTVhYmQ3M8GTeUQ=: 00:25:29.430 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:29.430 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:29.430 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:29.430 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Njg1YWZiMGE5YTY5YzNjZDIyN2FkYjViNjJkYjE5ODZhNWJlOTU3ZGVjZWNmNDdjOTczMzNiNzAwYTVhYmQ3M8GTeUQ=: 00:25:29.430 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:29.430 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:29.430 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.430 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:29.430 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:29.430 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:29.430 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.430 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:29.430 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.430 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.430 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.430 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.430 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:29.430 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:29.430 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:29.430 10:39:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.430 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.430 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:29.430 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.430 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:29.430 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:29.430 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:29.430 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:29.430 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.430 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.998 nvme0n1 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTNmYmQzZDhjOTJhZDlmNjhhOTMxNWE5MmRmYTE3YjjtLrcH: 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZjA3MmNmYTEwNzhmOTYwMmI1MmMzOTdiZTMwZTFjODZlOTkzMzJkYzdiYmYxYWJjYjI2MGM1ZWFiMzIyYTZlZbi5bLQ=: 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTNmYmQzZDhjOTJhZDlmNjhhOTMxNWE5MmRmYTE3YjjtLrcH: 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA3MmNmYTEwNzhmOTYwMmI1MmMzOTdiZTMwZTFjODZlOTkzMzJkYzdiYmYxYWJjYjI2MGM1ZWFiMzIyYTZlZbi5bLQ=: ]] 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA3MmNmYTEwNzhmOTYwMmI1MmMzOTdiZTMwZTFjODZlOTkzMzJkYzdiYmYxYWJjYjI2MGM1ZWFiMzIyYTZlZbi5bLQ=: 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.998 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:30.257 nvme0n1 00:25:30.257 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.257 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.257 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.257 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.257 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.257 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.257 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.257 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.257 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.257 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.257 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.257 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.257 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:30.257 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.257 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:30.257 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:30.257 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:30.257 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliODcxYmMzYmU5N2EwY2YwYWNjMDQzNDkyOGU1ZjdjZDk4Y2JiMTExMzczMTY1GRpaDg==: 00:25:30.257 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: 00:25:30.257 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:30.257 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:30.257 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliODcxYmMzYmU5N2EwY2YwYWNjMDQzNDkyOGU1ZjdjZDk4Y2JiMTExMzczMTY1GRpaDg==: 00:25:30.257 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: ]] 00:25:30.257 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: 00:25:30.257 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:30.257 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.257 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:30.257 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:30.257 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:30.257 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:30.257 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:30.257 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.257 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.257 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.257 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.257 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:30.257 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:30.257 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:30.257 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.257 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.257 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:30.257 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.257 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:30.257 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:30.258 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:30.258 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:30.258 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.258 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.258 nvme0n1 00:25:30.258 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.258 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.258 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.258 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.258 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.516 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.516 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.516 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.516 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.516 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.516 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.516 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.516 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:30.516 
10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.516 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:30.516 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:30.516 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:30.516 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkYjA2Nzg5MjM4MTU0YjdlMTQwODUyNGRmN2JjMzmapWOK: 00:25:30.516 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: 00:25:30.516 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:30.516 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:30.516 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTdkYjA2Nzg5MjM4MTU0YjdlMTQwODUyNGRmN2JjMzmapWOK: 00:25:30.516 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: ]] 00:25:30.516 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: 00:25:30.516 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:25:30.516 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.516 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:30.516 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:30.516 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:30.516 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.516 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:30.516 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.516 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.516 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.516 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.516 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:30.516 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:30.516 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:30.516 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.517 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.517 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:30.517 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.517 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:30.517 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:30.517 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:30.517 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:30.517 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.517 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.517 nvme0n1 00:25:30.517 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.517 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.517 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.517 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.517 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.517 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.775 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.775 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.775 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.775 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.775 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.775 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.775 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:30.775 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.775 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:30.776 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:30.776 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:30.776 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTQ3ODg1NThlODM3OGIzOTBjZDU2MzkxYzQwZTA5MjVhZjhmMTAzMWUzMzg1ZjkyzHJTZA==: 00:25:30.776 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWNlNGQxNDcxNWMwMTIzNjg0OWE5ZmU2YTUwZjI2NmSIL/rk: 00:25:30.776 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:30.776 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:30.776 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTQ3ODg1NThlODM3OGIzOTBjZDU2MzkxYzQwZTA5MjVhZjhmMTAzMWUzMzg1ZjkyzHJTZA==: 00:25:30.776 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWNlNGQxNDcxNWMwMTIzNjg0OWE5ZmU2YTUwZjI2NmSIL/rk: ]] 00:25:30.776 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWNlNGQxNDcxNWMwMTIzNjg0OWE5ZmU2YTUwZjI2NmSIL/rk: 00:25:30.776 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:30.776 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.776 
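[Note] connect_authenticate (host/auth.sh@104) then drives the SPDK initiator through the same combination. A sketch of the sequence, with scripts/rpc.py standing in for the suite's rpc_cmd wrapper; the address, port, and NQNs are taken verbatim from the trace, and key$keyid/ckey$keyid are keyring names registered earlier in the run, outside this excerpt:

    # Sketch of one connect_authenticate pass against the target at
    # 10.0.0.1:4420, as seen in the traced rpc_cmd invocations.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3

        rpc.py bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key$keyid" \
            ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
        # Authentication succeeded iff the controller came up under its name,
        # hence the repeated [[ nvme0 == \n\v\m\e\0 ]] checks in the log.
        [[ $(rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc.py bdev_nvme_detach_controller nvme0
    }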
10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:30.776 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:30.776 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:30.776 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.776 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:30.776 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.776 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.776 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.776 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.776 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:30.776 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:30.776 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:30.776 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.776 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.776 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:30.776 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.776 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:30.776 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:30.776 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:30.776 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:30.776 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.776 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.776 nvme0n1 00:25:30.776 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.776 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.776 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.776 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.776 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.776 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.776 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.776 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.776 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.776 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:30.776 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.776 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.035 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:31.035 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.035 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:31.035 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:31.035 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:31.035 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Njg1YWZiMGE5YTY5YzNjZDIyN2FkYjViNjJkYjE5ODZhNWJlOTU3ZGVjZWNmNDdjOTczMzNiNzAwYTVhYmQ3M8GTeUQ=: 00:25:31.035 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:31.035 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:31.035 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:31.035 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Njg1YWZiMGE5YTY5YzNjZDIyN2FkYjViNjJkYjE5ODZhNWJlOTU3ZGVjZWNmNDdjOTczMzNiNzAwYTVhYmQ3M8GTeUQ=: 00:25:31.035 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:31.035 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:31.035 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.035 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:31.035 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:31.035 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:31.035 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.035 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:31.035 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.035 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.035 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.035 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.035 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:31.035 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.035 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.035 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.035 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.035 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.035 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.035 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.035 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.035 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.035 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:31.035 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.035 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.035 nvme0n1 00:25:31.035 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.035 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.035 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.035 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.035 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.035 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.035 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.035 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.035 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.035 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.035 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.035 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:31.035 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.036 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:31.036 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.036 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:31.036 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:31.036 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:31.036 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTNmYmQzZDhjOTJhZDlmNjhhOTMxNWE5MmRmYTE3YjjtLrcH: 00:25:31.036 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA3MmNmYTEwNzhmOTYwMmI1MmMzOTdiZTMwZTFjODZlOTkzMzJkYzdiYmYxYWJjYjI2MGM1ZWFiMzIyYTZlZbi5bLQ=: 00:25:31.036 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:31.036 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:31.036 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTNmYmQzZDhjOTJhZDlmNjhhOTMxNWE5MmRmYTE3YjjtLrcH: 00:25:31.036 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA3MmNmYTEwNzhmOTYwMmI1MmMzOTdiZTMwZTFjODZlOTkzMzJkYzdiYmYxYWJjYjI2MGM1ZWFiMzIyYTZlZbi5bLQ=: ]] 00:25:31.036 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZjA3MmNmYTEwNzhmOTYwMmI1MmMzOTdiZTMwZTFjODZlOTkzMzJkYzdiYmYxYWJjYjI2MGM1ZWFiMzIyYTZlZbi5bLQ=: 00:25:31.036 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:31.036 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.036 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:31.036 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:31.036 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:31.036 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.036 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:31.036 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.036 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.036 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.036 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.036 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:31.036 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.036 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.036 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.036 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.036 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.036 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.036 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.036 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.036 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.036 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:31.036 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.036 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.294 nvme0n1 00:25:31.294 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.294 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.294 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.294 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.294 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.294 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.294 
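[Note] The get_main_ns_ip fragment traced repeatedly above (nvmf/common.sh@769-783) resolves which address the host should dial from the transport in use. A sketch consistent with the trace, where TEST_TRANSPORT is an assumed stand-in for whatever variable carries "tcp" in the real common.sh:

    # Map the transport to the *name* of the environment variable holding
    # the dial address, then dereference it (10.0.0.1 in this run).
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1   # indirect expansion, -z 10.0.0.1 here
        echo "${!ip}"
    }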
10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.294 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.294 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.294 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.294 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.294 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.294 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:31.294 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.294 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:31.294 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:31.294 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:31.294 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliODcxYmMzYmU5N2EwY2YwYWNjMDQzNDkyOGU1ZjdjZDk4Y2JiMTExMzczMTY1GRpaDg==: 00:25:31.294 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: 00:25:31.294 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:31.294 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:31.294 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliODcxYmMzYmU5N2EwY2YwYWNjMDQzNDkyOGU1ZjdjZDk4Y2JiMTExMzczMTY1GRpaDg==: 00:25:31.294 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: ]] 00:25:31.294 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: 00:25:31.294 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:31.294 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.294 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:31.294 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:31.294 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:31.294 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.294 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:31.294 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.294 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.294 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.294 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.294 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:31.294 10:39:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.294 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.294 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.294 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.294 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.294 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.294 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.294 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.294 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.294 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:31.294 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.294 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.552 nvme0n1 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkYjA2Nzg5MjM4MTU0YjdlMTQwODUyNGRmN2JjMzmapWOK: 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: 00:25:31.552 10:39:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTdkYjA2Nzg5MjM4MTU0YjdlMTQwODUyNGRmN2JjMzmapWOK: 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: ]] 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.552 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.810 nvme0n1 00:25:31.810 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.810 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.810 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.810 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.810 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.810 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.810 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.810 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.810 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.810 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.810 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.810 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.810 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:31.810 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.810 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:31.810 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:31.810 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:31.810 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTQ3ODg1NThlODM3OGIzOTBjZDU2MzkxYzQwZTA5MjVhZjhmMTAzMWUzMzg1ZjkyzHJTZA==: 00:25:31.811 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWNlNGQxNDcxNWMwMTIzNjg0OWE5ZmU2YTUwZjI2NmSIL/rk: 00:25:31.811 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:31.811 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:31.811 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTQ3ODg1NThlODM3OGIzOTBjZDU2MzkxYzQwZTA5MjVhZjhmMTAzMWUzMzg1ZjkyzHJTZA==: 00:25:31.811 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWNlNGQxNDcxNWMwMTIzNjg0OWE5ZmU2YTUwZjI2NmSIL/rk: ]] 00:25:31.811 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWNlNGQxNDcxNWMwMTIzNjg0OWE5ZmU2YTUwZjI2NmSIL/rk: 00:25:31.811 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:31.811 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.811 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:31.811 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:31.811 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:31.811 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.811 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:31.811 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.811 10:39:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.811 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.811 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.811 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:31.811 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.811 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.811 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.811 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.811 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.811 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.811 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.811 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.811 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.811 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:31.811 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.811 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.069 nvme0n1 00:25:32.069 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.069 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.069 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.069 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.069 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.069 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.069 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.069 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.069 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.069 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.069 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.069 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.069 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:32.069 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.069 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:32.069 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:32.069 
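[Note] The secrets echoed throughout follow the NVMe DH-HMAC-CHAP secret representation, DHHC-1:<t>:<base64>:, where <t> selects the optional HMAC transform applied to the configured secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload is the secret followed by a CRC-32 check value. Secrets of this shape can be produced with nvme-cli; the flag spellings below are assumptions that may vary by version:

    nvme gen-dhchap-key --key-length=32 --hmac=0   # -> DHHC-1:00:...:
    nvme gen-dhchap-key --key-length=48 --hmac=2   # -> DHHC-1:02:...:
    nvme gen-dhchap-key --key-length=64 --hmac=3   # -> DHHC-1:03:...: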
10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:32.069 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Njg1YWZiMGE5YTY5YzNjZDIyN2FkYjViNjJkYjE5ODZhNWJlOTU3ZGVjZWNmNDdjOTczMzNiNzAwYTVhYmQ3M8GTeUQ=: 00:25:32.069 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:32.069 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:32.069 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:32.069 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Njg1YWZiMGE5YTY5YzNjZDIyN2FkYjViNjJkYjE5ODZhNWJlOTU3ZGVjZWNmNDdjOTczMzNiNzAwYTVhYmQ3M8GTeUQ=: 00:25:32.069 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:32.069 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:32.069 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.069 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:32.069 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:32.069 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:32.069 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.069 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:32.069 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.069 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.069 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.069 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.069 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.069 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.069 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.069 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.069 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.069 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.070 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.070 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.070 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.070 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.070 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:32.070 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.070 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
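[Note] The loop markers at host/auth.sh@100-102, visible each time the trace moves to a new dhgroup or keyid, show the overall matrix being exercised: every configured digest against every dhgroup against every keyid, so a single failing combination surfaces with a named case. In outline:

    for digest in "${digests[@]}"; do        # host/auth.sh@100
        for dhgroup in "${dhgroups[@]}"; do  # host/auth.sh@101
            for keyid in "${!keys[@]}"; do   # host/auth.sh@102
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # @103
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # @104
            done
        done
    done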
00:25:32.328 nvme0n1 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTNmYmQzZDhjOTJhZDlmNjhhOTMxNWE5MmRmYTE3YjjtLrcH: 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA3MmNmYTEwNzhmOTYwMmI1MmMzOTdiZTMwZTFjODZlOTkzMzJkYzdiYmYxYWJjYjI2MGM1ZWFiMzIyYTZlZbi5bLQ=: 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTNmYmQzZDhjOTJhZDlmNjhhOTMxNWE5MmRmYTE3YjjtLrcH: 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA3MmNmYTEwNzhmOTYwMmI1MmMzOTdiZTMwZTFjODZlOTkzMzJkYzdiYmYxYWJjYjI2MGM1ZWFiMzIyYTZlZbi5bLQ=: ]] 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA3MmNmYTEwNzhmOTYwMmI1MmMzOTdiZTMwZTFjODZlOTkzMzJkYzdiYmYxYWJjYjI2MGM1ZWFiMzIyYTZlZbi5bLQ=: 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:32.328 10:39:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.328 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.587 nvme0n1 00:25:32.587 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.587 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.587 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.587 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.587 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.587 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.845 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.845 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.845 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.845 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.845 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.845 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.845 10:39:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:32.845 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.845 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:32.845 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:32.845 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:32.845 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliODcxYmMzYmU5N2EwY2YwYWNjMDQzNDkyOGU1ZjdjZDk4Y2JiMTExMzczMTY1GRpaDg==: 00:25:32.845 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: 00:25:32.845 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:32.845 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:32.845 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliODcxYmMzYmU5N2EwY2YwYWNjMDQzNDkyOGU1ZjdjZDk4Y2JiMTExMzczMTY1GRpaDg==: 00:25:32.845 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: ]] 00:25:32.845 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: 00:25:32.845 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:32.845 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.845 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:32.845 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:32.845 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:32.845 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.845 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:32.845 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.845 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.846 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.846 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.846 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.846 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.846 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.846 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.846 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.846 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.846 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.846 10:39:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.846 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.846 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.846 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:32.846 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.846 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.115 nvme0n1 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkYjA2Nzg5MjM4MTU0YjdlMTQwODUyNGRmN2JjMzmapWOK: 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTdkYjA2Nzg5MjM4MTU0YjdlMTQwODUyNGRmN2JjMzmapWOK: 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: ]] 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.115 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.401 nvme0n1 00:25:33.401 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.401 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.401 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.401 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.401 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.401 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.401 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.401 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:33.401 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.402 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.402 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.402 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.402 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:33.402 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.402 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:33.402 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:33.402 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:33.402 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTQ3ODg1NThlODM3OGIzOTBjZDU2MzkxYzQwZTA5MjVhZjhmMTAzMWUzMzg1ZjkyzHJTZA==: 00:25:33.402 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWNlNGQxNDcxNWMwMTIzNjg0OWE5ZmU2YTUwZjI2NmSIL/rk: 00:25:33.402 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:33.402 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:33.402 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTQ3ODg1NThlODM3OGIzOTBjZDU2MzkxYzQwZTA5MjVhZjhmMTAzMWUzMzg1ZjkyzHJTZA==: 00:25:33.402 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWNlNGQxNDcxNWMwMTIzNjg0OWE5ZmU2YTUwZjI2NmSIL/rk: ]] 00:25:33.402 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWNlNGQxNDcxNWMwMTIzNjg0OWE5ZmU2YTUwZjI2NmSIL/rk: 00:25:33.402 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:33.402 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.402 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:33.402 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:33.402 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:33.402 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.402 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:33.402 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.402 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.402 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.402 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.402 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.402 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.402 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.402 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.402 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.402 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.402 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.402 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.402 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.402 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.402 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:33.402 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.402 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.703 nvme0n1 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Njg1YWZiMGE5YTY5YzNjZDIyN2FkYjViNjJkYjE5ODZhNWJlOTU3ZGVjZWNmNDdjOTczMzNiNzAwYTVhYmQ3M8GTeUQ=: 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Njg1YWZiMGE5YTY5YzNjZDIyN2FkYjViNjJkYjE5ODZhNWJlOTU3ZGVjZWNmNDdjOTczMzNiNzAwYTVhYmQ3M8GTeUQ=: 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.703 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.964 nvme0n1 00:25:33.964 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.964 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.964 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.965 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.965 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.965 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.965 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.965 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.965 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.965 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.965 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.965 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:33.965 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.965 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:33.965 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.965 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:33.965 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:33.965 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:33.965 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTNmYmQzZDhjOTJhZDlmNjhhOTMxNWE5MmRmYTE3YjjtLrcH: 00:25:33.965 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA3MmNmYTEwNzhmOTYwMmI1MmMzOTdiZTMwZTFjODZlOTkzMzJkYzdiYmYxYWJjYjI2MGM1ZWFiMzIyYTZlZbi5bLQ=: 00:25:33.965 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:33.965 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:33.965 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTNmYmQzZDhjOTJhZDlmNjhhOTMxNWE5MmRmYTE3YjjtLrcH: 00:25:33.965 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA3MmNmYTEwNzhmOTYwMmI1MmMzOTdiZTMwZTFjODZlOTkzMzJkYzdiYmYxYWJjYjI2MGM1ZWFiMzIyYTZlZbi5bLQ=: ]] 00:25:33.965 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA3MmNmYTEwNzhmOTYwMmI1MmMzOTdiZTMwZTFjODZlOTkzMzJkYzdiYmYxYWJjYjI2MGM1ZWFiMzIyYTZlZbi5bLQ=: 00:25:33.965 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:33.965 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.965 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:33.965 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:33.965 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:33.965 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.965 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:33.965 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.965 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.965 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.965 10:39:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.965 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.965 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.965 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.965 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.965 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.965 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.965 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.965 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.965 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.965 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.965 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:33.965 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.965 10:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.532 nvme0n1 00:25:34.532 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.532 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.532 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.532 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.532 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.532 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.532 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.532 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.532 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.532 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.532 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.532 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.532 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:34.532 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.532 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:34.532 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:34.532 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:34.533 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YzliODcxYmMzYmU5N2EwY2YwYWNjMDQzNDkyOGU1ZjdjZDk4Y2JiMTExMzczMTY1GRpaDg==: 00:25:34.533 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: 00:25:34.533 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:34.533 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:34.533 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliODcxYmMzYmU5N2EwY2YwYWNjMDQzNDkyOGU1ZjdjZDk4Y2JiMTExMzczMTY1GRpaDg==: 00:25:34.533 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: ]] 00:25:34.533 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: 00:25:34.533 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:25:34.533 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.533 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:34.533 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:34.533 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:34.533 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.533 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:34.533 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.533 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.533 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.533 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.533 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:34.533 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:34.533 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:34.533 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.533 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.533 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:34.533 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.533 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:34.533 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:34.533 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:34.533 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:34.533 10:39:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.533 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.791 nvme0n1 00:25:34.791 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.791 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.791 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.791 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.791 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.791 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.050 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.050 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.050 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.050 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.050 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.050 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.050 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:35.050 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.050 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:35.050 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:35.050 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:35.050 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkYjA2Nzg5MjM4MTU0YjdlMTQwODUyNGRmN2JjMzmapWOK: 00:25:35.050 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: 00:25:35.050 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:35.050 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:35.050 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTdkYjA2Nzg5MjM4MTU0YjdlMTQwODUyNGRmN2JjMzmapWOK: 00:25:35.050 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: ]] 00:25:35.050 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: 00:25:35.050 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:35.050 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.050 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:35.050 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:35.050 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:35.050 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.050 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:35.050 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.050 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.050 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.050 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.050 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:35.050 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:35.050 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:35.050 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.050 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.050 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:35.050 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.050 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:35.050 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:35.050 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:35.050 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:35.050 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.050 10:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.309 nvme0n1 00:25:35.309 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.309 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.309 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.309 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.309 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.309 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.309 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.309 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.309 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.309 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.309 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.309 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.309 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:25:35.309 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.309 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:35.309 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:35.309 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:35.309 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTQ3ODg1NThlODM3OGIzOTBjZDU2MzkxYzQwZTA5MjVhZjhmMTAzMWUzMzg1ZjkyzHJTZA==: 00:25:35.309 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWNlNGQxNDcxNWMwMTIzNjg0OWE5ZmU2YTUwZjI2NmSIL/rk: 00:25:35.309 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:35.309 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:35.309 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTQ3ODg1NThlODM3OGIzOTBjZDU2MzkxYzQwZTA5MjVhZjhmMTAzMWUzMzg1ZjkyzHJTZA==: 00:25:35.309 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWNlNGQxNDcxNWMwMTIzNjg0OWE5ZmU2YTUwZjI2NmSIL/rk: ]] 00:25:35.309 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWNlNGQxNDcxNWMwMTIzNjg0OWE5ZmU2YTUwZjI2NmSIL/rk: 00:25:35.309 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:35.309 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.309 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:35.309 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:35.309 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:35.309 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.309 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:35.309 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.309 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.309 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.309 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.309 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:35.309 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:35.309 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:35.309 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.309 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.309 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:35.309 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.309 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:35.309 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:35.309 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:35.310 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:35.310 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.310 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.877 nvme0n1 00:25:35.877 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.877 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.877 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.877 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.877 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.877 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.877 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.877 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.877 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.877 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.877 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.877 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.877 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:35.877 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.877 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:35.877 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:35.877 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:35.877 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Njg1YWZiMGE5YTY5YzNjZDIyN2FkYjViNjJkYjE5ODZhNWJlOTU3ZGVjZWNmNDdjOTczMzNiNzAwYTVhYmQ3M8GTeUQ=: 00:25:35.877 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:35.877 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:35.878 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:35.878 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Njg1YWZiMGE5YTY5YzNjZDIyN2FkYjViNjJkYjE5ODZhNWJlOTU3ZGVjZWNmNDdjOTczMzNiNzAwYTVhYmQ3M8GTeUQ=: 00:25:35.878 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:35.878 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:35.878 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.878 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:35.878 10:39:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:35.878 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:35.878 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.878 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:35.878 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.878 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.878 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.878 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.878 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:35.878 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:35.878 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:35.878 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.878 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.878 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:35.878 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.878 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:35.878 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:35.878 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:35.878 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:35.878 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.878 10:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.137 nvme0n1 00:25:36.137 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.137 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.137 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.137 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.137 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.396 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.396 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.396 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.396 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.396 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.396 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.396 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:36.396 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.396 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:36.396 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.396 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:36.396 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:36.396 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:36.396 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTNmYmQzZDhjOTJhZDlmNjhhOTMxNWE5MmRmYTE3YjjtLrcH: 00:25:36.396 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA3MmNmYTEwNzhmOTYwMmI1MmMzOTdiZTMwZTFjODZlOTkzMzJkYzdiYmYxYWJjYjI2MGM1ZWFiMzIyYTZlZbi5bLQ=: 00:25:36.396 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:36.396 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:36.396 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTNmYmQzZDhjOTJhZDlmNjhhOTMxNWE5MmRmYTE3YjjtLrcH: 00:25:36.396 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA3MmNmYTEwNzhmOTYwMmI1MmMzOTdiZTMwZTFjODZlOTkzMzJkYzdiYmYxYWJjYjI2MGM1ZWFiMzIyYTZlZbi5bLQ=: ]] 00:25:36.396 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA3MmNmYTEwNzhmOTYwMmI1MmMzOTdiZTMwZTFjODZlOTkzMzJkYzdiYmYxYWJjYjI2MGM1ZWFiMzIyYTZlZbi5bLQ=: 00:25:36.396 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:36.396 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.396 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:36.396 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:36.396 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:36.396 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.396 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:36.396 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.396 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.396 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.396 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.396 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.396 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.396 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.396 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.396 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.396 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.396 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.396 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.396 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.396 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.396 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:36.396 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.396 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.965 nvme0n1 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliODcxYmMzYmU5N2EwY2YwYWNjMDQzNDkyOGU1ZjdjZDk4Y2JiMTExMzczMTY1GRpaDg==: 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzliODcxYmMzYmU5N2EwY2YwYWNjMDQzNDkyOGU1ZjdjZDk4Y2JiMTExMzczMTY1GRpaDg==: 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: ]] 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.965 10:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.532 nvme0n1 00:25:37.532 10:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.532 10:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.532 10:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.532 10:39:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.532 10:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.532 10:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.532 10:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.532 10:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.532 10:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.532 10:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.532 10:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.532 10:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.532 10:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:37.532 10:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.532 10:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:37.532 10:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:37.532 10:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:37.532 10:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkYjA2Nzg5MjM4MTU0YjdlMTQwODUyNGRmN2JjMzmapWOK: 00:25:37.532 10:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: 00:25:37.532 10:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:37.532 10:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:37.532 10:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTdkYjA2Nzg5MjM4MTU0YjdlMTQwODUyNGRmN2JjMzmapWOK: 00:25:37.532 10:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: ]] 00:25:37.532 10:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: 00:25:37.532 10:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:37.532 10:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.532 10:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:37.532 10:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:37.532 10:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:37.532 10:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.532 10:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:37.532 10:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.532 10:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.791 10:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.792 10:39:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.792 10:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.792 10:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.792 10:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.792 10:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.792 10:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.792 10:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.792 10:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.792 10:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.792 10:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.792 10:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.792 10:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:37.792 10:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.792 10:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.360 nvme0n1 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NTQ3ODg1NThlODM3OGIzOTBjZDU2MzkxYzQwZTA5MjVhZjhmMTAzMWUzMzg1ZjkyzHJTZA==: 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWNlNGQxNDcxNWMwMTIzNjg0OWE5ZmU2YTUwZjI2NmSIL/rk: 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTQ3ODg1NThlODM3OGIzOTBjZDU2MzkxYzQwZTA5MjVhZjhmMTAzMWUzMzg1ZjkyzHJTZA==: 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWNlNGQxNDcxNWMwMTIzNjg0OWE5ZmU2YTUwZjI2NmSIL/rk: ]] 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWNlNGQxNDcxNWMwMTIzNjg0OWE5ZmU2YTUwZjI2NmSIL/rk: 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:38.360 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.360 
10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.928 nvme0n1 00:25:38.928 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.928 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.928 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.928 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.928 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.928 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.928 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.929 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.929 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.929 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.929 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.929 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.929 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:38.929 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.929 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:38.929 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:38.929 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:38.929 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Njg1YWZiMGE5YTY5YzNjZDIyN2FkYjViNjJkYjE5ODZhNWJlOTU3ZGVjZWNmNDdjOTczMzNiNzAwYTVhYmQ3M8GTeUQ=: 00:25:38.929 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:38.929 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:38.929 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:38.929 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Njg1YWZiMGE5YTY5YzNjZDIyN2FkYjViNjJkYjE5ODZhNWJlOTU3ZGVjZWNmNDdjOTczMzNiNzAwYTVhYmQ3M8GTeUQ=: 00:25:38.929 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:38.929 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:38.929 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.929 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:38.929 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:38.929 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:38.929 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.929 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:38.929 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.929 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.929 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.929 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.929 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.929 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.929 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.929 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.929 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.929 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.929 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.929 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.929 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.929 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.929 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:38.929 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.929 10:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.498 nvme0n1 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliODcxYmMzYmU5N2EwY2YwYWNjMDQzNDkyOGU1ZjdjZDk4Y2JiMTExMzczMTY1GRpaDg==: 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliODcxYmMzYmU5N2EwY2YwYWNjMDQzNDkyOGU1ZjdjZDk4Y2JiMTExMzczMTY1GRpaDg==: 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: ]] 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.498 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.758 request: 00:25:39.758 { 00:25:39.758 "name": "nvme0", 00:25:39.758 "trtype": "tcp", 00:25:39.758 "traddr": "10.0.0.1", 00:25:39.758 "adrfam": "ipv4", 00:25:39.758 "trsvcid": "4420", 00:25:39.758 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:39.758 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:39.758 "prchk_reftag": false, 00:25:39.758 "prchk_guard": false, 00:25:39.758 "hdgst": false, 00:25:39.758 "ddgst": false, 00:25:39.758 "allow_unrecognized_csi": false, 00:25:39.758 "method": "bdev_nvme_attach_controller", 00:25:39.758 "req_id": 1 00:25:39.758 } 00:25:39.758 Got JSON-RPC error response 00:25:39.758 response: 00:25:39.758 { 00:25:39.758 "code": -5, 00:25:39.758 "message": "Input/output error" 00:25:39.758 } 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
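
The request/response exchange above is the first negative case in host/auth.sh: the kernel target is now provisioned to require DH-HMAC-CHAP (sha256, ffdhe2048, key1), so a bdev_nvme_attach_controller issued with no --dhchap-key at all is expected to fail with code -5 ("Input/output error"), after which bdev_nvme_get_controllers must report an empty list. A minimal sketch of that check, using only RPC names and flags that appear verbatim in this log (the rpc.py path is an assumption based on the workspace layout):

    # Assumed location of SPDK's RPC client in this workspace.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Attach without a DH-CHAP key against a target that demands one;
    # success here would itself be a test failure.
    if "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
        echo "attach unexpectedly succeeded" >&2
        exit 1
    fi

    # No controller may be left behind after the failed handshake.
    [ "$("$rpc" bdev_nvme_get_controllers | jq length)" -eq 0 ]
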
00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.758 request: 00:25:39.758 { 00:25:39.758 "name": "nvme0", 00:25:39.758 "trtype": "tcp", 00:25:39.758 "traddr": "10.0.0.1", 00:25:39.758 "adrfam": "ipv4", 00:25:39.758 "trsvcid": "4420", 00:25:39.758 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:39.758 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:39.758 "prchk_reftag": false, 00:25:39.758 "prchk_guard": false, 00:25:39.758 "hdgst": false, 00:25:39.758 "ddgst": false, 00:25:39.758 "dhchap_key": "key2", 00:25:39.758 "allow_unrecognized_csi": false, 00:25:39.758 "method": "bdev_nvme_attach_controller", 00:25:39.758 "req_id": 1 00:25:39.758 } 00:25:39.758 Got JSON-RPC error response 00:25:39.758 response: 00:25:39.758 { 00:25:39.758 "code": -5, 00:25:39.758 "message": "Input/output error" 00:25:39.758 } 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
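
Same pattern, second negative case: the host presented --dhchap-key key2 while the target was keyed with key1, so the attach again failed with -5 and the controller-count check just above confirmed nothing was left behind. For contrast, the positive path (used repeatedly in the sha512/ffdhe8192 loop earlier in this log) supplies the matching key pair; key1/ckey1 are keyring names registered earlier in the test and not shown in this excerpt, and rpc is the assumed client path from the previous sketch:

    # Positive counterpart: matching host key and controller key, with the
    # short reconnect timeouts the test uses for the later re-key steps.
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1 \
        --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
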
00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:39.758 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:39.759 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:39.759 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:39.759 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:39.759 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:39.759 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.759 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.759 request: 00:25:39.759 { 00:25:39.759 "name": "nvme0", 00:25:39.759 "trtype": "tcp", 00:25:39.759 "traddr": "10.0.0.1", 00:25:39.759 "adrfam": "ipv4", 00:25:39.759 "trsvcid": "4420", 00:25:39.759 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:39.759 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:39.759 "prchk_reftag": false, 00:25:39.759 "prchk_guard": false, 00:25:39.759 "hdgst": false, 00:25:39.759 "ddgst": false, 00:25:39.759 "dhchap_key": "key1", 00:25:39.759 "dhchap_ctrlr_key": "ckey2", 00:25:39.759 "allow_unrecognized_csi": false, 00:25:39.759 "method": "bdev_nvme_attach_controller", 00:25:39.759 "req_id": 1 00:25:39.759 } 00:25:39.759 Got JSON-RPC error response 00:25:39.759 response: 00:25:39.759 { 00:25:39.759 "code": -5, 00:25:39.759 "message": "Input/output 
error" 00:25:39.759 } 00:25:39.759 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:39.759 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:39.759 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:39.759 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:39.759 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:39.759 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:25:39.759 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.759 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.759 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.759 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.759 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.759 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.759 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.759 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.759 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.759 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:40.018 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:40.018 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.018 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.018 nvme0n1 00:25:40.018 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.018 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:40.018 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.018 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:40.018 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:40.018 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:40.018 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkYjA2Nzg5MjM4MTU0YjdlMTQwODUyNGRmN2JjMzmapWOK: 00:25:40.018 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: 00:25:40.018 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:40.018 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:40.018 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTdkYjA2Nzg5MjM4MTU0YjdlMTQwODUyNGRmN2JjMzmapWOK: 00:25:40.018 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: ]] 00:25:40.018 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: 00:25:40.018 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:40.018 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.018 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.018 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.018 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.018 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:25:40.018 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.018 10:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.018 10:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.018 10:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.018 10:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:40.018 10:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:40.018 10:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:40.019 10:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:40.019 10:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:40.019 10:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:40.019 10:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:40.019 10:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:40.019 10:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.019 10:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.277 request: 00:25:40.277 { 00:25:40.277 "name": "nvme0", 00:25:40.277 "dhchap_key": "key1", 00:25:40.277 "dhchap_ctrlr_key": "ckey2", 00:25:40.277 "method": "bdev_nvme_set_keys", 00:25:40.277 "req_id": 1 00:25:40.277 } 00:25:40.277 Got JSON-RPC error response 00:25:40.277 response: 00:25:40.277 { 00:25:40.277 "code": -13, 00:25:40.277 "message": "Permission denied" 00:25:40.277 } 00:25:40.277 10:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:40.277 10:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:40.277 10:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:40.277 10:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:40.277 10:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:25:40.277 10:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.277 10:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:40.277 10:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.277 10:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.277 10:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.277 10:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:40.277 10:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:41.216 10:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.216 10:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.216 10:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:41.216 10:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.216 10:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.216 10:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:41.216 10:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:42.593 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.593 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:42.593 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.593 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.593 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.593 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:25:42.593 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:42.593 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.593 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:42.593 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:42.593 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:42.593 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliODcxYmMzYmU5N2EwY2YwYWNjMDQzNDkyOGU1ZjdjZDk4Y2JiMTExMzczMTY1GRpaDg==: 00:25:42.593 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: 00:25:42.593 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:42.593 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:42.593 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliODcxYmMzYmU5N2EwY2YwYWNjMDQzNDkyOGU1ZjdjZDk4Y2JiMTExMzczMTY1GRpaDg==: 00:25:42.593 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: ]] 00:25:42.593 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NGZjYjllNTQ0MWQ5MzY0OTk4MjJlM2I1OTdjNzczMzQ0ZDMyZTNhMjZmNTRiN2FjRiCfAw==: 00:25:42.593 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:25:42.593 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.593 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.593 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.593 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.594 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.594 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.594 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.594 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.594 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.594 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.594 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:42.594 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.594 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.594 nvme0n1 00:25:42.594 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.594 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:42.594 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.594 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:42.594 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:42.594 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:42.594 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkYjA2Nzg5MjM4MTU0YjdlMTQwODUyNGRmN2JjMzmapWOK: 00:25:42.594 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: 00:25:42.594 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:42.594 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:42.594 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTdkYjA2Nzg5MjM4MTU0YjdlMTQwODUyNGRmN2JjMzmapWOK: 00:25:42.594 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: ]] 00:25:42.594 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTAwNWIwMDQ1NmYwNjVhNTI0MWNlYzBmYzMwN2ZiMGJWMOKH: 00:25:42.594 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:42.594 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:25:42.594 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:42.594 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:42.594 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:42.594 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:42.594 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:42.594 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:42.594 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.594 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.594 request: 00:25:42.594 { 00:25:42.594 "name": "nvme0", 00:25:42.594 "dhchap_key": "key2", 00:25:42.594 "dhchap_ctrlr_key": "ckey1", 00:25:42.594 "method": "bdev_nvme_set_keys", 00:25:42.594 "req_id": 1 00:25:42.594 } 00:25:42.594 Got JSON-RPC error response 00:25:42.594 response: 00:25:42.594 { 00:25:42.594 "code": -13, 00:25:42.594 "message": "Permission denied" 00:25:42.594 } 00:25:42.594 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:42.594 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:42.594 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:42.594 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:42.594 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:42.594 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.594 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:42.594 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.594 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.594 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.594 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:25:42.594 10:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:25:43.532 10:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.532 10:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:43.532 10:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.532 10:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.532 10:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.791 10:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:25:43.791 10:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:25:43.791 10:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:25:43.791 10:39:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:43.791 10:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:43.791 10:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:25:43.791 10:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:43.791 10:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:25:43.791 10:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:43.791 10:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:43.791 rmmod nvme_tcp 00:25:43.791 rmmod nvme_fabrics 00:25:43.791 10:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:43.791 10:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:25:43.791 10:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:25:43.791 10:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 1645772 ']' 00:25:43.791 10:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 1645772 00:25:43.791 10:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 1645772 ']' 00:25:43.791 10:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 1645772 00:25:43.791 10:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:25:43.791 10:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:43.791 10:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1645772 00:25:43.791 10:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:43.791 10:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:43.791 10:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1645772' 00:25:43.791 killing process with pid 1645772 00:25:43.791 10:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 1645772 00:25:43.791 10:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 1645772 00:25:43.791 10:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:43.791 10:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:43.791 10:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:43.791 10:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:25:43.791 10:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:25:43.791 10:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:43.791 10:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:44.051 10:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:44.051 10:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:44.051 10:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.051 10:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:25:44.051 10:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:45.960 10:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:45.960 10:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:45.960 10:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:45.960 10:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:45.960 10:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:45.960 10:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:25:45.960 10:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:45.960 10:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:45.960 10:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:45.960 10:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:45.960 10:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:45.960 10:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:45.960 10:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:49.253 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:49.253 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:49.253 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:49.253 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:49.253 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:49.253 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:49.253 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:49.253 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:49.253 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:49.253 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:49.253 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:49.253 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:49.253 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:49.253 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:49.253 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:49.253 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:49.822 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:49.822 10:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Umk /tmp/spdk.key-null.Qi0 /tmp/spdk.key-sha256.KWB /tmp/spdk.key-sha384.EGT /tmp/spdk.key-sha512.ZKG /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:25:49.822 10:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:52.359 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:25:52.359 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:52.359 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 
00:25:52.359 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:25:52.359 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:25:52.359 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:25:52.619 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:25:52.619 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:25:52.619 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:25:52.619 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:25:52.619 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:25:52.619 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:25:52.619 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:25:52.619 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:25:52.619 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:25:52.619 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:25:52.619 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:25:52.619 00:25:52.619 real 0m53.655s 00:25:52.619 user 0m48.759s 00:25:52.619 sys 0m12.346s 00:25:52.619 10:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:52.619 10:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.619 ************************************ 00:25:52.619 END TEST nvmf_auth_host 00:25:52.619 ************************************ 00:25:52.619 10:39:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:25:52.619 10:39:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:52.619 10:39:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:52.619 10:39:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:52.619 10:39:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.619 ************************************ 00:25:52.619 START TEST nvmf_digest 00:25:52.619 ************************************ 00:25:52.619 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:52.879 * Looking for test storage... 
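Recap of the kernel-target teardown traced above: the configfs tree must be unlinked leaf-to-root before the nvmet modules can unload. Condensed, with the one redirection the xtrace does not show marked as an assumption:

    rm    /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
    rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable   # assumed target of the traced 'echo 0'
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    modprobe -r nvmet_tcp nvmet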
00:25:52.879 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:52.879 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:52.879 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:25:52.879 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:52.879 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:52.879 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:52.879 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:52.879 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:52.879 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:25:52.879 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:25:52.879 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:25:52.879 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:25:52.879 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:25:52.879 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:25:52.879 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:25:52.879 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:52.879 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:25:52.879 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:25:52.879 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:52.879 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:52.879 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:25:52.879 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:25:52.879 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:52.879 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:25:52.879 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:25:52.879 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:25:52.879 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:25:52.879 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:52.879 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:25:52.879 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:25:52.879 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:52.879 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:52.879 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:25:52.879 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:52.879 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:52.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:52.879 --rc genhtml_branch_coverage=1 00:25:52.879 --rc genhtml_function_coverage=1 00:25:52.879 --rc genhtml_legend=1 00:25:52.879 --rc geninfo_all_blocks=1 00:25:52.879 --rc geninfo_unexecuted_blocks=1 00:25:52.879 00:25:52.879 ' 00:25:52.879 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:52.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:52.879 --rc genhtml_branch_coverage=1 00:25:52.879 --rc genhtml_function_coverage=1 00:25:52.879 --rc genhtml_legend=1 00:25:52.879 --rc geninfo_all_blocks=1 00:25:52.879 --rc geninfo_unexecuted_blocks=1 00:25:52.879 00:25:52.879 ' 00:25:52.879 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:52.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:52.879 --rc genhtml_branch_coverage=1 00:25:52.879 --rc genhtml_function_coverage=1 00:25:52.879 --rc genhtml_legend=1 00:25:52.879 --rc geninfo_all_blocks=1 00:25:52.879 --rc geninfo_unexecuted_blocks=1 00:25:52.879 00:25:52.879 ' 00:25:52.879 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:52.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:52.879 --rc genhtml_branch_coverage=1 00:25:52.879 --rc genhtml_function_coverage=1 00:25:52.879 --rc genhtml_legend=1 00:25:52.879 --rc geninfo_all_blocks=1 00:25:52.879 --rc geninfo_unexecuted_blocks=1 00:25:52.879 00:25:52.879 ' 00:25:52.879 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:52.879 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:25:52.879 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:52.879 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:52.879 
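The lt 1.15 2 gate traced above is scripts/common.sh's dotted-version comparison deciding which lcov option spelling to use. A minimal self-contained sketch of the same logic (not the verbatim implementation):

    lt() {
        local -a v1 v2
        local i len
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        (( len = ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < len; i++ )); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # strictly greater -> not less-than
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        done
        return 1   # equal is not less-than
    }
    lt 1.15 2 && echo "lcov < 2: keep the 1.x --rc option names"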
10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:52.879 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:52.880 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:52.880 10:39:26 
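Note on the '[: : integer expression expected' complaint above: build_nvmf_app_args feeds an unset flag straight into a numeric test, i.e. the shell evaluates [ "" -eq 1 ]. A defaulted expansion would keep the test well-formed (flag name hypothetical):

    [ "" -eq 1 ]                      # -> [: : integer expression expected
    [ "${SPDK_SOME_FLAG:-0}" -eq 1 ]  # -> cleanly false when the flag is unset

The run proceeds regardless; the failed test simply evaluates as false.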
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:25:52.880 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:59.454 
10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:59.454 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:59.454 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:59.454 Found net devices under 0000:af:00.0: cvl_0_0 
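The nvmf_tcp_init sequence in the trace that follows builds a two-namespace topology out of the E810 port pair just discovered: the target side (cvl_0_0, 10.0.0.2) moves into namespace cvl_0_0_ns_spdk while the initiator keeps cvl_0_1 (10.0.0.1) in the root namespace. Condensed:

    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # tagged with an SPDK_NVMF comment in the trace
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # both directions verified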
00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:59.454 Found net devices under 0000:af:00.1: cvl_0_1 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:59.454 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:59.455 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:59.455 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.336 ms 00:25:59.455 00:25:59.455 --- 10.0.0.2 ping statistics --- 00:25:59.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.455 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:59.455 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:59.455 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:25:59.455 00:25:59.455 --- 10.0.0.1 ping statistics --- 00:25:59.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.455 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:59.455 ************************************ 00:25:59.455 START TEST nvmf_digest_clean 00:25:59.455 ************************************ 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=1659987 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 1659987 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1659987 ']' 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:59.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:59.455 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:59.455 [2024-12-12 10:39:32.872198] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:25:59.455 [2024-12-12 10:39:32.872238] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:59.455 [2024-12-12 10:39:32.950996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.455 [2024-12-12 10:39:32.991892] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:59.455 [2024-12-12 10:39:32.991927] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:59.455 [2024-12-12 10:39:32.991934] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:59.455 [2024-12-12 10:39:32.991942] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:59.455 [2024-12-12 10:39:32.991947] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
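Each run_bperf invocation below drives the same four steps; condensed from the traced commands, with paths shortened to the spdk repo root (first run shown: randread, 4096-byte I/O, queue depth 128):

    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

--ddgst enables the NVMe/TCP data digest (a CRC32C over every data PDU), which is what generates the crc32c accel activity each run checks for afterwards.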
00:25:59.455 [2024-12-12 10:39:32.992447] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:59.714 10:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:59.714 10:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:59.714 10:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:59.714 10:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:59.714 10:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:59.714 10:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:59.714 10:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:25:59.714 10:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:25:59.714 10:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:25:59.714 10:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.714 10:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:59.973 null0 00:25:59.973 [2024-12-12 10:39:33.823660] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:59.973 [2024-12-12 10:39:33.847856] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:59.974 10:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.974 10:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:25:59.974 10:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:59.974 10:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:59.974 10:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:59.974 10:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:59.974 10:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:59.974 10:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:59.974 10:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1660047 00:25:59.974 10:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1660047 /var/tmp/bperf.sock 00:25:59.974 10:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:59.974 10:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1660047 ']' 00:25:59.974 10:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:59.974 10:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:25:59.974 10:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:59.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:59.974 10:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:59.974 10:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:59.974 [2024-12-12 10:39:33.899540] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:25:59.974 [2024-12-12 10:39:33.899589] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1660047 ] 00:25:59.974 [2024-12-12 10:39:33.972806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:00.233 [2024-12-12 10:39:34.014423] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:00.233 10:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:00.233 10:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:00.233 10:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:00.233 10:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:00.233 10:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:00.492 10:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:00.492 10:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:00.750 nvme0n1 00:26:00.750 10:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:00.751 10:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:00.751 Running I/O for 2 seconds... 
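Quick consistency check on the headline columns in the table below: throughput is just IOPS scaled by the I/O size, MiB/s = IOPS * io_size / 2^20. For this 4096-byte run:

    awk 'BEGIN { printf "%.2f\n", 25313.24 * 4096 / 1048576 }'   # -> 98.88, matching the MiB/s column

The same identity holds for the 131072-byte and randwrite runs further down.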
00:26:03.063 25875.00 IOPS, 101.07 MiB/s [2024-12-12T09:39:37.086Z] 25303.00 IOPS, 98.84 MiB/s 00:26:03.063 Latency(us) 00:26:03.063 [2024-12-12T09:39:37.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:03.063 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:03.063 nvme0n1 : 2.01 25313.24 98.88 0.00 0.00 5051.19 2559.02 16352.79 00:26:03.063 [2024-12-12T09:39:37.086Z] =================================================================================================================== 00:26:03.064 [2024-12-12T09:39:37.087Z] Total : 25313.24 98.88 0.00 0.00 5051.19 2559.02 16352.79 00:26:03.064 { 00:26:03.064 "results": [ 00:26:03.064 { 00:26:03.064 "job": "nvme0n1", 00:26:03.064 "core_mask": "0x2", 00:26:03.064 "workload": "randread", 00:26:03.064 "status": "finished", 00:26:03.064 "queue_depth": 128, 00:26:03.064 "io_size": 4096, 00:26:03.064 "runtime": 2.00646, 00:26:03.064 "iops": 25313.23824048324, 00:26:03.064 "mibps": 98.87983687688765, 00:26:03.064 "io_failed": 0, 00:26:03.064 "io_timeout": 0, 00:26:03.064 "avg_latency_us": 5051.191700803495, 00:26:03.064 "min_latency_us": 2559.024761904762, 00:26:03.064 "max_latency_us": 16352.792380952382 00:26:03.064 } 00:26:03.064 ], 00:26:03.064 "core_count": 1 00:26:03.064 } 00:26:03.064 10:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:03.064 10:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:03.064 10:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:03.064 10:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:03.064 | select(.opcode=="crc32c") 00:26:03.064 | "\(.module_name) \(.executed)"' 00:26:03.064 10:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:03.064 10:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:03.064 10:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:03.064 10:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:03.064 10:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:03.064 10:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1660047 00:26:03.064 10:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1660047 ']' 00:26:03.064 10:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1660047 00:26:03.064 10:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:03.064 10:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:03.064 10:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1660047 00:26:03.064 10:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:03.064 10:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:26:03.064 10:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1660047' 00:26:03.064 killing process with pid 1660047 00:26:03.064 10:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1660047 00:26:03.064 Received shutdown signal, test time was about 2.000000 seconds 00:26:03.064 00:26:03.064 Latency(us) 00:26:03.064 [2024-12-12T09:39:37.087Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:03.064 [2024-12-12T09:39:37.087Z] =================================================================================================================== 00:26:03.064 [2024-12-12T09:39:37.087Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:03.064 10:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1660047 00:26:03.323 10:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:03.323 10:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:03.323 10:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:03.323 10:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:03.323 10:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:03.323 10:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:03.323 10:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:03.323 10:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1660688 00:26:03.323 10:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1660688 /var/tmp/bperf.sock 00:26:03.323 10:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:03.323 10:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1660688 ']' 00:26:03.323 10:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:03.323 10:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:03.323 10:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:03.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:03.323 10:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:03.323 10:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:03.323 [2024-12-12 10:39:37.167855] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:26:03.323 [2024-12-12 10:39:37.167903] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1660688 ] 00:26:03.323 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:03.323 Zero copy mechanism will not be used. 00:26:03.323 [2024-12-12 10:39:37.242811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:03.323 [2024-12-12 10:39:37.283305] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:03.323 10:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:03.323 10:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:03.323 10:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:03.323 10:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:03.323 10:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:03.582 10:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:03.582 10:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:03.843 nvme0n1 00:26:04.101 10:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:04.101 10:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:04.101 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:04.101 Zero copy mechanism will not be used. 00:26:04.101 Running I/O for 2 seconds... 
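(The 131072-byte run above also notes it exceeds the 65536-byte zero-copy threshold, so sends fall back to copied buffers.) Each run ends with the same crc32c accounting check, visible after the table below: accel_get_stats output is parsed with a jq filter taken verbatim from the trace. A hypothetical, abbreviated payload to show the shape it expects (field names from the filter; values invented):

    # {"operations": [{"opcode": "crc32c", "module_name": "software", "executed": 12345}]}
    scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # -> software 12345

With scan_dsa=false the expected module is "software", and executed > 0 proves the data-digest CRC32C path was actually exercised.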
00:26:05.974 5643.00 IOPS, 705.38 MiB/s [2024-12-12T09:39:39.997Z] 5936.50 IOPS, 742.06 MiB/s 00:26:05.974 Latency(us) 00:26:05.974 [2024-12-12T09:39:39.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:05.974 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:05.974 nvme0n1 : 2.00 5936.77 742.10 0.00 0.00 2692.56 631.95 13232.03 00:26:05.974 [2024-12-12T09:39:39.997Z] =================================================================================================================== 00:26:05.974 [2024-12-12T09:39:39.997Z] Total : 5936.77 742.10 0.00 0.00 2692.56 631.95 13232.03 00:26:05.974 { 00:26:05.974 "results": [ 00:26:05.974 { 00:26:05.974 "job": "nvme0n1", 00:26:05.974 "core_mask": "0x2", 00:26:05.974 "workload": "randread", 00:26:05.974 "status": "finished", 00:26:05.974 "queue_depth": 16, 00:26:05.975 "io_size": 131072, 00:26:05.975 "runtime": 2.002604, 00:26:05.975 "iops": 5936.770325036802, 00:26:05.975 "mibps": 742.0962906296003, 00:26:05.975 "io_failed": 0, 00:26:05.975 "io_timeout": 0, 00:26:05.975 "avg_latency_us": 2692.561098093876, 00:26:05.975 "min_latency_us": 631.9542857142857, 00:26:05.975 "max_latency_us": 13232.030476190475 00:26:05.975 } 00:26:05.975 ], 00:26:05.975 "core_count": 1 00:26:05.975 } 00:26:06.233 10:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:06.233 10:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:06.233 10:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:06.233 10:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:06.233 | select(.opcode=="crc32c") 00:26:06.233 | "\(.module_name) \(.executed)"' 00:26:06.233 10:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:06.233 10:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:06.233 10:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:06.233 10:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:06.233 10:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:06.233 10:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1660688 00:26:06.233 10:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1660688 ']' 00:26:06.233 10:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1660688 00:26:06.233 10:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:06.233 10:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:06.233 10:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1660688 00:26:06.492 10:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:06.492 10:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:26:06.492 10:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1660688' 00:26:06.492 killing process with pid 1660688 00:26:06.492 10:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1660688 00:26:06.492 Received shutdown signal, test time was about 2.000000 seconds 00:26:06.492 00:26:06.492 Latency(us) 00:26:06.492 [2024-12-12T09:39:40.515Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:06.492 [2024-12-12T09:39:40.515Z] =================================================================================================================== 00:26:06.492 [2024-12-12T09:39:40.515Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:06.492 10:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1660688 00:26:06.492 10:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:06.492 10:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:06.492 10:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:06.492 10:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:06.492 10:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:06.492 10:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:06.493 10:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:06.493 10:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1661147 00:26:06.493 10:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1661147 /var/tmp/bperf.sock 00:26:06.493 10:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:06.493 10:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1661147 ']' 00:26:06.493 10:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:06.493 10:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:06.493 10:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:06.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:06.493 10:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:06.493 10:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:06.493 [2024-12-12 10:39:40.477869] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:26:06.493 [2024-12-12 10:39:40.477915] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1661147 ] 00:26:06.752 [2024-12-12 10:39:40.551398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.752 [2024-12-12 10:39:40.587555] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:06.752 10:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:06.752 10:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:06.752 10:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:06.752 10:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:06.752 10:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:07.011 10:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:07.011 10:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:07.269 nvme0n1 00:26:07.269 10:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:07.269 10:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:07.528 Running I/O for 2 seconds... 
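Condensed, the run_bperf sequence traced above reduces to four steps (a sketch only, with $SPDK_DIR standing in for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout shown in the trace):

    # Start bdevperf idle (-z) with subsystem init held back (--wait-for-rpc)
    $SPDK_DIR/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc &

    # Finish init, then attach the target with data digest (--ddgst) enabled,
    # so every TCP data PDU carries a crc32c to compute and verify
    $SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    $SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Kick off the timed run against the resulting nvme0n1 bdev
    $SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests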
00:26:09.400 27726.00 IOPS, 108.30 MiB/s [2024-12-12T09:39:43.423Z] 27803.00 IOPS, 108.61 MiB/s 00:26:09.400 Latency(us) 00:26:09.400 [2024-12-12T09:39:43.423Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:09.400 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:09.400 nvme0n1 : 2.01 27803.97 108.61 0.00 0.00 4597.07 3354.82 9362.29 00:26:09.400 [2024-12-12T09:39:43.423Z] =================================================================================================================== 00:26:09.400 [2024-12-12T09:39:43.423Z] Total : 27803.97 108.61 0.00 0.00 4597.07 3354.82 9362.29 00:26:09.400 { 00:26:09.400 "results": [ 00:26:09.400 { 00:26:09.400 "job": "nvme0n1", 00:26:09.400 "core_mask": "0x2", 00:26:09.400 "workload": "randwrite", 00:26:09.400 "status": "finished", 00:26:09.400 "queue_depth": 128, 00:26:09.400 "io_size": 4096, 00:26:09.400 "runtime": 2.005685, 00:26:09.400 "iops": 27803.967223168143, 00:26:09.400 "mibps": 108.60924696550056, 00:26:09.400 "io_failed": 0, 00:26:09.400 "io_timeout": 0, 00:26:09.400 "avg_latency_us": 4597.0656723075845, 00:26:09.400 "min_latency_us": 3354.8190476190475, 00:26:09.400 "max_latency_us": 9362.285714285714 00:26:09.400 } 00:26:09.400 ], 00:26:09.400 "core_count": 1 00:26:09.400 } 00:26:09.400 10:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:09.400 10:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:09.400 10:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:09.400 10:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:09.400 | select(.opcode=="crc32c") 00:26:09.400 | "\(.module_name) \(.executed)"' 00:26:09.400 10:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:09.659 10:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:09.659 10:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:09.659 10:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:09.659 10:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:09.659 10:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1661147 00:26:09.659 10:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1661147 ']' 00:26:09.659 10:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1661147 00:26:09.659 10:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:09.659 10:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:09.659 10:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1661147 00:26:09.659 10:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:09.659 10:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- 
# '[' reactor_1 = sudo ']' 00:26:09.659 10:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1661147' 00:26:09.659 killing process with pid 1661147 00:26:09.659 10:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1661147 00:26:09.659 Received shutdown signal, test time was about 2.000000 seconds 00:26:09.659 00:26:09.659 Latency(us) 00:26:09.659 [2024-12-12T09:39:43.682Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:09.659 [2024-12-12T09:39:43.682Z] =================================================================================================================== 00:26:09.659 [2024-12-12T09:39:43.682Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:09.659 10:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1661147 00:26:09.919 10:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:09.919 10:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:09.919 10:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:09.919 10:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:09.919 10:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:09.919 10:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:09.919 10:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:09.919 10:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1661654 00:26:09.919 10:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1661654 /var/tmp/bperf.sock 00:26:09.919 10:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:09.919 10:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1661654 ']' 00:26:09.919 10:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:09.919 10:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:09.919 10:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:09.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:09.919 10:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:09.919 10:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:09.919 [2024-12-12 10:39:43.784312] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:26:09.919 [2024-12-12 10:39:43.784359] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1661654 ] 00:26:09.919 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:09.919 Zero copy mechanism will not be used. 00:26:09.919 [2024-12-12 10:39:43.858581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.919 [2024-12-12 10:39:43.899262] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:09.919 10:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:09.919 10:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:09.919 10:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:09.919 10:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:09.919 10:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:10.486 10:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:10.486 10:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:10.486 nvme0n1 00:26:10.486 10:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:10.486 10:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:10.745 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:10.745 Zero copy mechanism will not be used. 00:26:10.745 Running I/O for 2 seconds... 
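Every run above ends with the same digest-accounting check. Condensed from the repeated host/digest.sh trace lines (sketch, same $SPDK_DIR shorthand), the pattern is:

    # Ask bdevperf's accel layer which module executed crc32c, and how often
    read -r acc_module acc_executed < <(
        $SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c")
                | "\(.module_name) \(.executed)"')

    (( acc_executed > 0 ))          # digests were actually computed
    [[ $acc_module == software ]]   # by the expected module (scan_dsa=false)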
00:26:12.616 6420.00 IOPS, 802.50 MiB/s [2024-12-12T09:39:46.639Z] 6534.00 IOPS, 816.75 MiB/s 00:26:12.616 Latency(us) 00:26:12.616 [2024-12-12T09:39:46.639Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:12.616 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:12.616 nvme0n1 : 2.00 6531.51 816.44 0.00 0.00 2445.60 1302.92 4306.65 00:26:12.616 [2024-12-12T09:39:46.639Z] =================================================================================================================== 00:26:12.616 [2024-12-12T09:39:46.639Z] Total : 6531.51 816.44 0.00 0.00 2445.60 1302.92 4306.65 00:26:12.616 { 00:26:12.616 "results": [ 00:26:12.616 { 00:26:12.616 "job": "nvme0n1", 00:26:12.616 "core_mask": "0x2", 00:26:12.616 "workload": "randwrite", 00:26:12.616 "status": "finished", 00:26:12.616 "queue_depth": 16, 00:26:12.616 "io_size": 131072, 00:26:12.616 "runtime": 2.003212, 00:26:12.616 "iops": 6531.510394306743, 00:26:12.616 "mibps": 816.4387992883429, 00:26:12.616 "io_failed": 0, 00:26:12.616 "io_timeout": 0, 00:26:12.616 "avg_latency_us": 2445.599890524232, 00:26:12.616 "min_latency_us": 1302.9180952380952, 00:26:12.616 "max_latency_us": 4306.651428571428 00:26:12.616 } 00:26:12.616 ], 00:26:12.616 "core_count": 1 00:26:12.616 } 00:26:12.616 10:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:12.616 10:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:12.616 10:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:12.616 10:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:12.616 | select(.opcode=="crc32c") 00:26:12.616 | "\(.module_name) \(.executed)"' 00:26:12.616 10:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:12.875 10:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:12.875 10:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:12.875 10:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:12.875 10:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:12.875 10:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1661654 00:26:12.875 10:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1661654 ']' 00:26:12.875 10:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1661654 00:26:12.875 10:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:12.875 10:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:12.875 10:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1661654 00:26:12.875 10:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:12.875 10:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:26:12.875 10:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1661654' 00:26:12.875 killing process with pid 1661654 00:26:12.875 10:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1661654 00:26:12.875 Received shutdown signal, test time was about 2.000000 seconds 00:26:12.875 00:26:12.875 Latency(us) 00:26:12.875 [2024-12-12T09:39:46.898Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:12.875 [2024-12-12T09:39:46.898Z] =================================================================================================================== 00:26:12.875 [2024-12-12T09:39:46.898Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:12.875 10:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1661654 00:26:13.133 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1659987 00:26:13.133 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1659987 ']' 00:26:13.133 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1659987 00:26:13.133 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:13.133 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:13.133 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1659987 00:26:13.133 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:13.133 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:13.134 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1659987' 00:26:13.134 killing process with pid 1659987 00:26:13.134 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1659987 00:26:13.134 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1659987 00:26:13.392 00:26:13.392 real 0m14.427s 00:26:13.392 user 0m27.024s 00:26:13.392 sys 0m4.709s 00:26:13.392 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:13.392 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:13.392 ************************************ 00:26:13.392 END TEST nvmf_digest_clean 00:26:13.392 ************************************ 00:26:13.392 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:13.392 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:13.392 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:13.392 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:13.392 ************************************ 00:26:13.392 START TEST nvmf_digest_error 00:26:13.392 ************************************ 00:26:13.392 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:26:13.392 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:13.392 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:13.392 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:13.392 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:13.392 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=1662308 00:26:13.392 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 1662308 00:26:13.392 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:13.392 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1662308 ']' 00:26:13.392 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:13.392 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:13.392 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:13.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:13.392 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:13.392 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:13.392 [2024-12-12 10:39:47.367163] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:26:13.392 [2024-12-12 10:39:47.367202] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:13.651 [2024-12-12 10:39:47.442499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.651 [2024-12-12 10:39:47.482014] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:13.651 [2024-12-12 10:39:47.482048] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:13.651 [2024-12-12 10:39:47.482055] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:13.651 [2024-12-12 10:39:47.482061] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:13.651 [2024-12-12 10:39:47.482066] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
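The target side of the error suite comes up as the nvmfappstart trace above shows (sketch; the namespace and flags are copied from the trace):

    # All tracepoint groups enabled (-e 0xFFFF), init suspended
    # (--wait-for-rpc) so crc32c can be reassigned before accel starts
    ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!

    # Per the notice above, a runtime snapshot of those tracepoints is
    # available via: spdk_trace -s nvmf -i 0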
00:26:13.651 [2024-12-12 10:39:47.482562] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.651 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:13.651 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:13.651 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:13.651 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:13.651 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:13.651 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:13.651 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:13.651 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.651 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:13.651 [2024-12-12 10:39:47.555016] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:13.651 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.651 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:13.651 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:13.651 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.651 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:13.651 null0 00:26:13.651 [2024-12-12 10:39:47.650119] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:13.910 [2024-12-12 10:39:47.674320] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:13.910 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.910 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:13.910 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:13.910 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:13.910 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:13.910 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:13.910 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1662330 00:26:13.910 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1662330 /var/tmp/bperf.sock 00:26:13.910 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:13.910 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1662330 ']' 
00:26:13.910 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:13.910 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:13.910 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:13.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:13.910 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:13.910 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:13.910 [2024-12-12 10:39:47.728510] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:26:13.910 [2024-12-12 10:39:47.728552] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1662330 ] 00:26:13.910 [2024-12-12 10:39:47.805138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.910 [2024-12-12 10:39:47.845468] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:14.169 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:14.169 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:14.169 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:14.169 10:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:14.169 10:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:14.169 10:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.169 10:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:14.169 10:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.169 10:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:14.169 10:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:14.428 nvme0n1 00:26:14.687 10:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:14.687 10:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.687 10:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
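The flood of data digest errors that follows is deliberate. Target-side crc32c was assigned to the error-injection accel module earlier (accel_assign_opc -o crc32c -m error above), and around the controller attach the test arms it. Condensed (sketch; rpc.py without -s addresses the target's default /var/tmp/spdk.sock):

    # Host side: keep per-opcode NVMe error counters and retry forever,
    # so corrupted digests surface as transient errors, not failed I/O
    $SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
        --nvme-error-stat --bdev-retry-count -1

    # Target side: injection idles at 'disable', then corrupts the next
    # 256 crc32c results for the timed run
    $SPDK_DIR/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    $SPDK_DIR/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

Each *ERROR* "data digest error" line below is the initiator catching one corrupted digest; the completions read COMMAND TRANSIENT TRANSPORT ERROR with dnr:0 (do-not-retry clear), so with --bdev-retry-count -1 the bdev layer retries rather than failing the I/O.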
00:26:14.687 10:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.687 10:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:14.687 10:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:14.687 Running I/O for 2 seconds... 00:26:14.687 [2024-12-12 10:39:48.575981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:14.687 [2024-12-12 10:39:48.576013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.687 [2024-12-12 10:39:48.576023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.687 [2024-12-12 10:39:48.586955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:14.687 [2024-12-12 10:39:48.586981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:12876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.687 [2024-12-12 10:39:48.586990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.687 [2024-12-12 10:39:48.599784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:14.687 [2024-12-12 10:39:48.599806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.687 [2024-12-12 10:39:48.599825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.687 [2024-12-12 10:39:48.611164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:14.687 [2024-12-12 10:39:48.611186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.687 [2024-12-12 10:39:48.611198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.687 [2024-12-12 10:39:48.623191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:14.687 [2024-12-12 10:39:48.623212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:16349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.687 [2024-12-12 10:39:48.623221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.687 [2024-12-12 10:39:48.634822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:14.687 [2024-12-12 10:39:48.634843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.687 [2024-12-12 10:39:48.634851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.687 [2024-12-12 10:39:48.647139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:14.687 [2024-12-12 10:39:48.647160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.687 [2024-12-12 10:39:48.647168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.687 [2024-12-12 10:39:48.660430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:14.687 [2024-12-12 10:39:48.660451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:13828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.687 [2024-12-12 10:39:48.660460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.687 [2024-12-12 10:39:48.668386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:14.688 [2024-12-12 10:39:48.668406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.688 [2024-12-12 10:39:48.668415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.688 [2024-12-12 10:39:48.680017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:14.688 [2024-12-12 10:39:48.680039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.688 [2024-12-12 10:39:48.680047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.688 [2024-12-12 10:39:48.690067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:14.688 [2024-12-12 10:39:48.690088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.688 [2024-12-12 10:39:48.690097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.688 [2024-12-12 10:39:48.699238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:14.688 [2024-12-12 10:39:48.699259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:11713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.688 [2024-12-12 10:39:48.699268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.688 [2024-12-12 10:39:48.708181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:14.688 [2024-12-12 10:39:48.708206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.688 [2024-12-12 10:39:48.708214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.948 [2024-12-12 10:39:48.717743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:14.948 [2024-12-12 10:39:48.717766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:2064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.948 [2024-12-12 10:39:48.717775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.948 [2024-12-12 10:39:48.727813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:14.948 [2024-12-12 10:39:48.727835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:7100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.948 [2024-12-12 10:39:48.727844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.948 [2024-12-12 10:39:48.736669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:14.948 [2024-12-12 10:39:48.736690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.948 [2024-12-12 10:39:48.736699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.948 [2024-12-12 10:39:48.747727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:14.948 [2024-12-12 10:39:48.747748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.948 [2024-12-12 10:39:48.747757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.948 [2024-12-12 10:39:48.760339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:14.948 [2024-12-12 10:39:48.760360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.948 [2024-12-12 10:39:48.760370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.948 [2024-12-12 10:39:48.771463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:14.948 [2024-12-12 10:39:48.771483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.948 [2024-12-12 10:39:48.771491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.948 [2024-12-12 10:39:48.780166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:14.948 [2024-12-12 10:39:48.780187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.948 [2024-12-12 10:39:48.780195] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.948 [2024-12-12 10:39:48.789775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:14.948 [2024-12-12 10:39:48.789796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.948 [2024-12-12 10:39:48.789807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.948 [2024-12-12 10:39:48.802253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:14.948 [2024-12-12 10:39:48.802275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.948 [2024-12-12 10:39:48.802283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.948 [2024-12-12 10:39:48.813479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:14.948 [2024-12-12 10:39:48.813500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.948 [2024-12-12 10:39:48.813509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.948 [2024-12-12 10:39:48.823620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:14.948 [2024-12-12 10:39:48.823641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:8946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.948 [2024-12-12 10:39:48.823649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.948 [2024-12-12 10:39:48.832456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:14.948 [2024-12-12 10:39:48.832477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.948 [2024-12-12 10:39:48.832485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.948 [2024-12-12 10:39:48.842505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:14.948 [2024-12-12 10:39:48.842526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:7103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.948 [2024-12-12 10:39:48.842539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.948 [2024-12-12 10:39:48.852339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:14.948 [2024-12-12 10:39:48.852361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:16433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:14.948 [2024-12-12 10:39:48.852369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.948 [2024-12-12 10:39:48.862110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:14.948 [2024-12-12 10:39:48.862131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.948 [2024-12-12 10:39:48.862139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.948 [2024-12-12 10:39:48.870603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:14.948 [2024-12-12 10:39:48.870624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:5522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.948 [2024-12-12 10:39:48.870632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.948 [2024-12-12 10:39:48.879048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:14.948 [2024-12-12 10:39:48.879069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.948 [2024-12-12 10:39:48.879080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.948 [2024-12-12 10:39:48.889331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:14.948 [2024-12-12 10:39:48.889352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:15018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.948 [2024-12-12 10:39:48.889360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.948 [2024-12-12 10:39:48.898532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:14.948 [2024-12-12 10:39:48.898553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.948 [2024-12-12 10:39:48.898561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.948 [2024-12-12 10:39:48.907699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:14.948 [2024-12-12 10:39:48.907720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.948 [2024-12-12 10:39:48.907728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.948 [2024-12-12 10:39:48.916949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:14.948 [2024-12-12 10:39:48.916970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 
lba:23842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.948 [2024-12-12 10:39:48.916979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.948 [2024-12-12 10:39:48.927161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:14.948 [2024-12-12 10:39:48.927183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:16417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.948 [2024-12-12 10:39:48.927191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.948 [2024-12-12 10:39:48.937101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:14.948 [2024-12-12 10:39:48.937123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:15574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.948 [2024-12-12 10:39:48.937131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.948 [2024-12-12 10:39:48.945839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:14.948 [2024-12-12 10:39:48.945860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.948 [2024-12-12 10:39:48.945868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.948 [2024-12-12 10:39:48.956470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:14.948 [2024-12-12 10:39:48.956491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.948 [2024-12-12 10:39:48.956499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.948 [2024-12-12 10:39:48.966456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:14.948 [2024-12-12 10:39:48.966479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.948 [2024-12-12 10:39:48.966487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.208 [2024-12-12 10:39:48.975431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.208 [2024-12-12 10:39:48.975454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.208 [2024-12-12 10:39:48.975462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.208 [2024-12-12 10:39:48.985780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.208 [2024-12-12 10:39:48.985802] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.208 [2024-12-12 10:39:48.985809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.208 [2024-12-12 10:39:48.994748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.208 [2024-12-12 10:39:48.994769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.208 [2024-12-12 10:39:48.994778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.208 [2024-12-12 10:39:49.005951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.208 [2024-12-12 10:39:49.005972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:82 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.208 [2024-12-12 10:39:49.005981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.208 [2024-12-12 10:39:49.017979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.208 [2024-12-12 10:39:49.018001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.208 [2024-12-12 10:39:49.018009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.208 [2024-12-12 10:39:49.028483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.208 [2024-12-12 10:39:49.028504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.208 [2024-12-12 10:39:49.028512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.208 [2024-12-12 10:39:49.037032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.208 [2024-12-12 10:39:49.037052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.208 [2024-12-12 10:39:49.037060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.208 [2024-12-12 10:39:49.047395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.208 [2024-12-12 10:39:49.047416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.208 [2024-12-12 10:39:49.047428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.208 [2024-12-12 10:39:49.058187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.208 
[2024-12-12 10:39:49.058209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.208 [2024-12-12 10:39:49.058217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.208 [2024-12-12 10:39:49.066415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.208 [2024-12-12 10:39:49.066436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.208 [2024-12-12 10:39:49.066444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.208 [2024-12-12 10:39:49.077482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.208 [2024-12-12 10:39:49.077504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.208 [2024-12-12 10:39:49.077512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.208 [2024-12-12 10:39:49.086763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.208 [2024-12-12 10:39:49.086785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:11698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.208 [2024-12-12 10:39:49.086793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.208 [2024-12-12 10:39:49.095174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.208 [2024-12-12 10:39:49.095195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:12916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.209 [2024-12-12 10:39:49.095203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.209 [2024-12-12 10:39:49.105796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.209 [2024-12-12 10:39:49.105817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.209 [2024-12-12 10:39:49.105825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.209 [2024-12-12 10:39:49.115017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.209 [2024-12-12 10:39:49.115038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.209 [2024-12-12 10:39:49.115046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.209 [2024-12-12 10:39:49.123945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x23ca9a0) 00:26:15.209 [2024-12-12 10:39:49.123965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.209 [2024-12-12 10:39:49.123973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.209 [2024-12-12 10:39:49.133695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.209 [2024-12-12 10:39:49.133720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.209 [2024-12-12 10:39:49.133728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.209 [2024-12-12 10:39:49.142636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.209 [2024-12-12 10:39:49.142656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.209 [2024-12-12 10:39:49.142664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.209 [2024-12-12 10:39:49.151922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.209 [2024-12-12 10:39:49.151943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.209 [2024-12-12 10:39:49.151951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.209 [2024-12-12 10:39:49.160838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.209 [2024-12-12 10:39:49.160858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:6187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.209 [2024-12-12 10:39:49.160867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.209 [2024-12-12 10:39:49.169116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.209 [2024-12-12 10:39:49.169137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.209 [2024-12-12 10:39:49.169145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.209 [2024-12-12 10:39:49.178355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.209 [2024-12-12 10:39:49.178375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.209 [2024-12-12 10:39:49.178384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.209 [2024-12-12 10:39:49.187799] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.209 [2024-12-12 10:39:49.187820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.209 [2024-12-12 10:39:49.187829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.209 [2024-12-12 10:39:49.196866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.209 [2024-12-12 10:39:49.196887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:8694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.209 [2024-12-12 10:39:49.196896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.209 [2024-12-12 10:39:49.205352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.209 [2024-12-12 10:39:49.205372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.209 [2024-12-12 10:39:49.205381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.209 [2024-12-12 10:39:49.216393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.209 [2024-12-12 10:39:49.216413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:10668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.209 [2024-12-12 10:39:49.216421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.209 [2024-12-12 10:39:49.228107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.209 [2024-12-12 10:39:49.228127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:25375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.209 [2024-12-12 10:39:49.228136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.468 [2024-12-12 10:39:49.237783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.468 [2024-12-12 10:39:49.237803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.468 [2024-12-12 10:39:49.237811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.468 [2024-12-12 10:39:49.246486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.468 [2024-12-12 10:39:49.246506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.468 [2024-12-12 10:39:49.246514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:26:15.468 [2024-12-12 10:39:49.255870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.468 [2024-12-12 10:39:49.255891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:25408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.469 [2024-12-12 10:39:49.255899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.469 [2024-12-12 10:39:49.265238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.469 [2024-12-12 10:39:49.265260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.469 [2024-12-12 10:39:49.265268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.469 [2024-12-12 10:39:49.274356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.469 [2024-12-12 10:39:49.274377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:5568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.469 [2024-12-12 10:39:49.274384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.469 [2024-12-12 10:39:49.283097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.469 [2024-12-12 10:39:49.283119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.469 [2024-12-12 10:39:49.283127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.469 [2024-12-12 10:39:49.293394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.469 [2024-12-12 10:39:49.293416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.469 [2024-12-12 10:39:49.293428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.469 [2024-12-12 10:39:49.302564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.469 [2024-12-12 10:39:49.302591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:20726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.469 [2024-12-12 10:39:49.302599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.469 [2024-12-12 10:39:49.310841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.469 [2024-12-12 10:39:49.310862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.469 [2024-12-12 10:39:49.310870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.469 [2024-12-12 10:39:49.320342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.469 [2024-12-12 10:39:49.320364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.469 [2024-12-12 10:39:49.320372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.469 [2024-12-12 10:39:49.330716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.469 [2024-12-12 10:39:49.330739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.469 [2024-12-12 10:39:49.330747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.469 [2024-12-12 10:39:49.338799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.469 [2024-12-12 10:39:49.338820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.469 [2024-12-12 10:39:49.338828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.469 [2024-12-12 10:39:49.349390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.469 [2024-12-12 10:39:49.349412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.469 [2024-12-12 10:39:49.349420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.469 [2024-12-12 10:39:49.357527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.469 [2024-12-12 10:39:49.357548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.469 [2024-12-12 10:39:49.357556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.469 [2024-12-12 10:39:49.367826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.469 [2024-12-12 10:39:49.367848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:25408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.469 [2024-12-12 10:39:49.367857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.469 [2024-12-12 10:39:49.377384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.469 [2024-12-12 10:39:49.377409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.469 [2024-12-12 10:39:49.377418] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.469 [2024-12-12 10:39:49.388170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.469 [2024-12-12 10:39:49.388192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.469 [2024-12-12 10:39:49.388201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.469 [2024-12-12 10:39:49.397656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.469 [2024-12-12 10:39:49.397677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.469 [2024-12-12 10:39:49.397685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.469 [2024-12-12 10:39:49.406235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.469 [2024-12-12 10:39:49.406257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.469 [2024-12-12 10:39:49.406265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.469 [2024-12-12 10:39:49.415796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.469 [2024-12-12 10:39:49.415817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.469 [2024-12-12 10:39:49.415826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.469 [2024-12-12 10:39:49.425272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.469 [2024-12-12 10:39:49.425295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.469 [2024-12-12 10:39:49.425304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.469 [2024-12-12 10:39:49.433974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.469 [2024-12-12 10:39:49.433995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.469 [2024-12-12 10:39:49.434003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.469 [2024-12-12 10:39:49.444371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.469 [2024-12-12 10:39:49.444392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:16623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:15.469 [2024-12-12 10:39:49.444400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.469 [2024-12-12 10:39:49.453385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.469 [2024-12-12 10:39:49.453406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.469 [2024-12-12 10:39:49.453417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.469 [2024-12-12 10:39:49.464741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.469 [2024-12-12 10:39:49.464762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.469 [2024-12-12 10:39:49.464770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.469 [2024-12-12 10:39:49.473714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.469 [2024-12-12 10:39:49.473735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.469 [2024-12-12 10:39:49.473743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.469 [2024-12-12 10:39:49.481549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.469 [2024-12-12 10:39:49.481578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.469 [2024-12-12 10:39:49.481587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.729 [2024-12-12 10:39:49.492456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.729 [2024-12-12 10:39:49.492478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.729 [2024-12-12 10:39:49.492486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.729 [2024-12-12 10:39:49.502280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.729 [2024-12-12 10:39:49.502301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.729 [2024-12-12 10:39:49.502310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.729 [2024-12-12 10:39:49.510650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.729 [2024-12-12 10:39:49.510672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1564 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.729 [2024-12-12 10:39:49.510680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.729 [2024-12-12 10:39:49.520177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.729 [2024-12-12 10:39:49.520199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:18032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.729 [2024-12-12 10:39:49.520207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.729 [2024-12-12 10:39:49.529322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.729 [2024-12-12 10:39:49.529343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.729 [2024-12-12 10:39:49.529352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.729 [2024-12-12 10:39:49.538742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.729 [2024-12-12 10:39:49.538768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.729 [2024-12-12 10:39:49.538776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.729 [2024-12-12 10:39:49.547720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.729 [2024-12-12 10:39:49.547741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.729 [2024-12-12 10:39:49.547749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.729 [2024-12-12 10:39:49.557546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.729 [2024-12-12 10:39:49.557574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.729 [2024-12-12 10:39:49.557583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.729 25788.00 IOPS, 100.73 MiB/s [2024-12-12T09:39:49.752Z] [2024-12-12 10:39:49.566847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.729 [2024-12-12 10:39:49.566869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.729 [2024-12-12 10:39:49.566877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.729 [2024-12-12 10:39:49.577815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.729 [2024-12-12 
10:39:49.577837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:17267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.729 [2024-12-12 10:39:49.577845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.729 [2024-12-12 10:39:49.586211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.729 [2024-12-12 10:39:49.586233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.729 [2024-12-12 10:39:49.586241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.729 [2024-12-12 10:39:49.596209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.729 [2024-12-12 10:39:49.596231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.729 [2024-12-12 10:39:49.596240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.729 [2024-12-12 10:39:49.606228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.729 [2024-12-12 10:39:49.606249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.729 [2024-12-12 10:39:49.606258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.729 [2024-12-12 10:39:49.616212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.729 [2024-12-12 10:39:49.616234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:10180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.729 [2024-12-12 10:39:49.616242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.729 [2024-12-12 10:39:49.625392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.729 [2024-12-12 10:39:49.625413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:22461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.729 [2024-12-12 10:39:49.625421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.729 [2024-12-12 10:39:49.633470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.729 [2024-12-12 10:39:49.633491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.729 [2024-12-12 10:39:49.633500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.729 [2024-12-12 10:39:49.645333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x23ca9a0) 00:26:15.729 [2024-12-12 10:39:49.645354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.729 [2024-12-12 10:39:49.645362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.729 [2024-12-12 10:39:49.656583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.729 [2024-12-12 10:39:49.656605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.729 [2024-12-12 10:39:49.656613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.729 [2024-12-12 10:39:49.666143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.729 [2024-12-12 10:39:49.666165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:16074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.729 [2024-12-12 10:39:49.666173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.729 [2024-12-12 10:39:49.674473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.729 [2024-12-12 10:39:49.674494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.729 [2024-12-12 10:39:49.674502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.729 [2024-12-12 10:39:49.683921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.730 [2024-12-12 10:39:49.683943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.730 [2024-12-12 10:39:49.683951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.730 [2024-12-12 10:39:49.693136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.730 [2024-12-12 10:39:49.693158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.730 [2024-12-12 10:39:49.693167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.730 [2024-12-12 10:39:49.701924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.730 [2024-12-12 10:39:49.701945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.730 [2024-12-12 10:39:49.701957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.730 [2024-12-12 10:39:49.711099] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.730 [2024-12-12 10:39:49.711120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.730 [2024-12-12 10:39:49.711128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.730 [2024-12-12 10:39:49.720485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.730 [2024-12-12 10:39:49.720505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.730 [2024-12-12 10:39:49.720514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.730 [2024-12-12 10:39:49.729309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.730 [2024-12-12 10:39:49.729329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.730 [2024-12-12 10:39:49.729338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.730 [2024-12-12 10:39:49.739025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.730 [2024-12-12 10:39:49.739047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:17066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.730 [2024-12-12 10:39:49.739055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.730 [2024-12-12 10:39:49.749962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.730 [2024-12-12 10:39:49.749984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.730 [2024-12-12 10:39:49.749992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.989 [2024-12-12 10:39:49.758474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.989 [2024-12-12 10:39:49.758494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.989 [2024-12-12 10:39:49.758503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.989 [2024-12-12 10:39:49.767763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.989 [2024-12-12 10:39:49.767784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.989 [2024-12-12 10:39:49.767792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:26:15.989 [2024-12-12 10:39:49.778201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.989 [2024-12-12 10:39:49.778222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:7408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.989 [2024-12-12 10:39:49.778230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.989 [2024-12-12 10:39:49.786821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.989 [2024-12-12 10:39:49.786841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.989 [2024-12-12 10:39:49.786849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.989 [2024-12-12 10:39:49.796825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.989 [2024-12-12 10:39:49.796847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.989 [2024-12-12 10:39:49.796855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.989 [2024-12-12 10:39:49.806692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.989 [2024-12-12 10:39:49.806714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.989 [2024-12-12 10:39:49.806723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.989 [2024-12-12 10:39:49.815457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.989 [2024-12-12 10:39:49.815478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:24823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.989 [2024-12-12 10:39:49.815486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.989 [2024-12-12 10:39:49.825934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.989 [2024-12-12 10:39:49.825955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.990 [2024-12-12 10:39:49.825963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.990 [2024-12-12 10:39:49.833905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.990 [2024-12-12 10:39:49.833926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.990 [2024-12-12 10:39:49.833933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.990 [2024-12-12 10:39:49.843872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.990 [2024-12-12 10:39:49.843894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.990 [2024-12-12 10:39:49.843902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.990 [2024-12-12 10:39:49.852993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.990 [2024-12-12 10:39:49.853014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.990 [2024-12-12 10:39:49.853022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.990 [2024-12-12 10:39:49.863356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.990 [2024-12-12 10:39:49.863376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.990 [2024-12-12 10:39:49.863388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.990 [2024-12-12 10:39:49.872828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.990 [2024-12-12 10:39:49.872850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.990 [2024-12-12 10:39:49.872858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.990 [2024-12-12 10:39:49.881347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.990 [2024-12-12 10:39:49.881368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.990 [2024-12-12 10:39:49.881377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.990 [2024-12-12 10:39:49.890078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.990 [2024-12-12 10:39:49.890099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.990 [2024-12-12 10:39:49.890106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.990 [2024-12-12 10:39:49.900111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.990 [2024-12-12 10:39:49.900132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:24663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.990 [2024-12-12 10:39:49.900141] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.990 [2024-12-12 10:39:49.910806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.990 [2024-12-12 10:39:49.910827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.990 [2024-12-12 10:39:49.910835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.990 [2024-12-12 10:39:49.920172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.990 [2024-12-12 10:39:49.920192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:11595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.990 [2024-12-12 10:39:49.920201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.990 [2024-12-12 10:39:49.930174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.990 [2024-12-12 10:39:49.930194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.990 [2024-12-12 10:39:49.930202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.990 [2024-12-12 10:39:49.938467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.990 [2024-12-12 10:39:49.938488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.990 [2024-12-12 10:39:49.938496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.990 [2024-12-12 10:39:49.949090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.990 [2024-12-12 10:39:49.949115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.990 [2024-12-12 10:39:49.949123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.990 [2024-12-12 10:39:49.959200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.990 [2024-12-12 10:39:49.959221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.990 [2024-12-12 10:39:49.959229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.990 [2024-12-12 10:39:49.966845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.990 [2024-12-12 10:39:49.966866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:15.990 [2024-12-12 10:39:49.966874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.990 [2024-12-12 10:39:49.976430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.990 [2024-12-12 10:39:49.976452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.990 [2024-12-12 10:39:49.976460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.990 [2024-12-12 10:39:49.986279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.990 [2024-12-12 10:39:49.986301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.990 [2024-12-12 10:39:49.986309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.990 [2024-12-12 10:39:49.995429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.990 [2024-12-12 10:39:49.995450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:17594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.990 [2024-12-12 10:39:49.995458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.990 [2024-12-12 10:39:50.004685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:15.990 [2024-12-12 10:39:50.004709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.990 [2024-12-12 10:39:50.004718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.250 [2024-12-12 10:39:50.013805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:16.250 [2024-12-12 10:39:50.013827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.250 [2024-12-12 10:39:50.013835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.250 [2024-12-12 10:39:50.024535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:16.250 [2024-12-12 10:39:50.024558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.250 [2024-12-12 10:39:50.024566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.250 [2024-12-12 10:39:50.034794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:16.250 [2024-12-12 10:39:50.034824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 
lba:3453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.250 [2024-12-12 10:39:50.034832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.250 [2024-12-12 10:39:50.044927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:16.250 [2024-12-12 10:39:50.044953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.250 [2024-12-12 10:39:50.044963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.250 [2024-12-12 10:39:50.053599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:16.250 [2024-12-12 10:39:50.053621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.250 [2024-12-12 10:39:50.053631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.250 [2024-12-12 10:39:50.063922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:16.250 [2024-12-12 10:39:50.063945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.250 [2024-12-12 10:39:50.063954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.250 [2024-12-12 10:39:50.074525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:16.250 [2024-12-12 10:39:50.074548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.250 [2024-12-12 10:39:50.074557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.250 [2024-12-12 10:39:50.083659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:16.250 [2024-12-12 10:39:50.083681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.250 [2024-12-12 10:39:50.083690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.250 [2024-12-12 10:39:50.093772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:16.250 [2024-12-12 10:39:50.093794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:2786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.250 [2024-12-12 10:39:50.093803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.250 [2024-12-12 10:39:50.104594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:16.250 [2024-12-12 10:39:50.104615] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.250 [2024-12-12 10:39:50.104624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.250 [2024-12-12 10:39:50.112916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:16.250 [2024-12-12 10:39:50.112938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:24146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.250 [2024-12-12 10:39:50.112953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.250 [2024-12-12 10:39:50.124500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:16.250 [2024-12-12 10:39:50.124522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:21320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.250 [2024-12-12 10:39:50.124530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.250 [2024-12-12 10:39:50.135429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:16.250 [2024-12-12 10:39:50.135450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.250 [2024-12-12 10:39:50.135458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.250 [2024-12-12 10:39:50.143744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:16.250 [2024-12-12 10:39:50.143765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.250 [2024-12-12 10:39:50.143773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.250 [2024-12-12 10:39:50.155182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:16.250 [2024-12-12 10:39:50.155202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.250 [2024-12-12 10:39:50.155210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.250 [2024-12-12 10:39:50.166771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 00:26:16.250 [2024-12-12 10:39:50.166792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.250 [2024-12-12 10:39:50.166800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.250 [2024-12-12 10:39:50.177622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0) 
00:26:16.250 [2024-12-12 10:39:50.177644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:16.250 [2024-12-12 10:39:50.177652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:16.250 [2024-12-12 10:39:50.189586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0)
00:26:16.250 [2024-12-12 10:39:50.189610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:16.250 [2024-12-12 10:39:50.189618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... dozens of further data digest errors on tqpair=(0x23ca9a0), each completed as COMMAND TRANSIENT TRANSPORT ERROR (00/22), elided ...]
00:26:16.770 25436.50 IOPS, 99.36 MiB/s [2024-12-12T09:39:50.793Z] [2024-12-12 10:39:50.564689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23ca9a0)
00:26:16.770 [2024-12-12 10:39:50.564710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:16.770 [2024-12-12 10:39:50.564718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:16.770
00:26:16.770 Latency(us)
00:26:16.770 [2024-12-12T09:39:50.793Z] Device Information : runtime(s)     IOPS    MiB/s   Fail/s   TO/s   Average      min      max
00:26:16.770 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:26:16.770 nvme0n1            : 2.00       25449.31    99.41     0.00   0.00   5023.42  2715.06  16602.45
00:26:16.770 [2024-12-12T09:39:50.793Z] ===================================================================================================================
00:26:16.770 [2024-12-12T09:39:50.793Z] Total              :           25449.31    99.41     0.00   0.00   5023.42  2715.06  16602.45
00:26:16.770 {
00:26:16.770   "results": [
00:26:16.770     {
00:26:16.770       "job": "nvme0n1",
00:26:16.770       "core_mask": "0x2",
00:26:16.770       "workload": "randread",
00:26:16.770       "status": "finished",
00:26:16.770       "queue_depth": 128,
00:26:16.770       "io_size": 4096,
00:26:16.770       "runtime": 2.004023,
00:26:16.770       "iops": 25449.308715518735,
00:26:16.770       "mibps": 99.41136216999506,
00:26:16.770       "io_failed": 0,
00:26:16.770       "io_timeout": 0,
00:26:16.770       "avg_latency_us": 5023.416416298093,
00:26:16.770       "min_latency_us": 2715.062857142857,
00:26:16.770       "max_latency_us": 16602.453333333335
00:26:16.770     }
00:26:16.770   ],
00:26:16.770   "core_count": 1
00:26:16.770 }
00:26:16.770 10:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:16.770 10:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:16.770 10:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:16.770 | .driver_specific
00:26:16.770 | .nvme_error
00:26:16.770 | .status_code
00:26:16.770 | .command_transient_transport_error'
00:26:16.770 10:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:16.770 10:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 200 > 0 ))
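The transient-error check traced just above can be reproduced by hand against a live bdevperf instance. A minimal sketch, assuming the same /var/tmp/bperf.sock RPC socket and nvme0n1 bdev used in this run, with SPDK_DIR standing in for the workspace checkout path; the jq filter is the one host/digest.sh traces:

    # Read bdevperf's per-bdev I/O statistics over its RPC socket and pull
    # out the count of completions with NVMe status 00/22 (COMMAND TRANSIENT
    # TRANSPORT ERROR), which is how the injected digest errors surface.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    errcount=$("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0]
        | .driver_specific
        | .nvme_error
        | .status_code
        | .command_transient_transport_error')
    # The harness asserts at least one such error was observed (200 here):
    (( errcount > 0 ))

The nvme_error block is only populated because the controller is set up with bdev_nvme_set_options --nvme-error-stat, as the trace for the next run shows below.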
00:26:16.770 10:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1662330
00:26:17.030 10:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1662330 ']'
00:26:17.030 10:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1662330
00:26:17.030 10:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:26:17.030 10:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:17.030 10:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1662330
00:26:17.030 10:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:26:17.030 10:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:26:17.030 10:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1662330'
00:26:17.030 killing process with pid 1662330
00:26:17.030 10:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1662330
00:26:17.030 Received shutdown signal, test time was about 2.000000 seconds
00:26:17.030
00:26:17.030 Latency(us)
00:26:17.030 [2024-12-12T09:39:51.053Z] Device Information : runtime(s)     IOPS    MiB/s   Fail/s   TO/s   Average      min      max
00:26:17.030 [2024-12-12T09:39:51.053Z] ===================================================================================================================
00:26:17.030 [2024-12-12T09:39:51.053Z] Total              :               0.00     0.00     0.00   0.00      0.00     0.00     0.00
00:26:17.030 10:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1662330
00:26:17.030 10:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:26:17.030 10:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:17.030 10:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:26:17.030 10:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:26:17.030 10:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:26:17.030 10:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1662952
00:26:17.030 10:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1662952 /var/tmp/bperf.sock
00:26:17.030 10:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:26:17.030 10:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1662952 ']'
00:26:17.030 10:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:17.030 10:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:17.030 10:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:17.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:17.030 10:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:17.030 10:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:17.289 [2024-12-12 10:39:51.053084] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization...
00:26:17.289 [2024-12-12 10:39:51.053134] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1662952 ]
00:26:17.289 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:17.289 Zero copy mechanism will not be used.
00:26:17.289 [2024-12-12 10:39:51.127062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:17.289 [2024-12-12 10:39:51.164774] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:26:17.289 10:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:17.289 10:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:26:17.289 10:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:17.289 10:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:17.572 10:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:17.572 10:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:17.572 10:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:17.572 10:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:17.572 10:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:17.572 10:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:17.851 nvme0n1
00:26:17.851 10:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:26:17.851 10:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:17.851 10:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:17.851 10:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:17.851 10:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:17.851 10:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:18.116 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:18.116 Zero copy mechanism will not be used.
00:26:18.116 Running I/O for 2 seconds...
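The trace above is the whole setup for this error pass: bdevperf starts in wait-for-RPC mode, per-NVMe error statistics and unlimited bdev retries are enabled, the controller is attached with data digest (--ddgst) while crc32c error injection is disabled, and injection is then switched to corrupt mode before the workload runs. A condensed, hand-runnable sketch of the same sequence, assuming the workspace checkout path from this run and a target already listening on 10.0.0.2:4420; note the traced run issues accel_error_inject_error through rpc_cmd, whereas this sketch sends everything to bperf.sock only to stay self-contained:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock"

    # Start bdevperf on core 1 (mask 0x2) in wait-for-RPC mode (-z):
    # 131072-byte random reads, queue depth 16, 2-second run.
    "$SPDK_DIR/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 131072 -t 2 -q 16 -z &
    sleep 1   # the harness polls with waitforlisten; a sleep stands in here

    # Count NVMe errors per status code and retry transient failures
    # indefinitely (-1) so injected digest errors don't fail the job.
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach with data digest enabled while injection is off, then corrupt
    # crc32c results at the -i 32 interval so READ data digests mismatch;
    # each mismatch is logged as a 00/22 completion below.
    $RPC accel_error_inject_error -o crc32c -t disable
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32

    # Kick off the configured 2-second workload.
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests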
00:26:18.116 [2024-12-12 10:39:51.888010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560)
00:26:18.116 [2024-12-12 10:39:51.888046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.116 [2024-12-12 10:39:51.888061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:26:18.116 [2024-12-12 10:39:51.894446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560)
00:26:18.116 [2024-12-12 10:39:51.894473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.116 [2024-12-12 10:39:51.894483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
[... dozens of further data digest errors on tqpair=(0x705560), each completed as COMMAND TRANSIENT TRANSPORT ERROR (00/22), elided ...]
00:26:18.380 [2024-12-12 10:39:52.363076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560)
00:26:18.380 [2024-12-12 10:39:52.363099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.380 [2024-12-12 10:39:52.363107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22)
qid:1 cid:6 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:18.380 [2024-12-12 10:39:52.370498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.380 [2024-12-12 10:39:52.370520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.380 [2024-12-12 10:39:52.370529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.380 [2024-12-12 10:39:52.377943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.380 [2024-12-12 10:39:52.377964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.380 [2024-12-12 10:39:52.377973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:18.380 [2024-12-12 10:39:52.385544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.380 [2024-12-12 10:39:52.385567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.380 [2024-12-12 10:39:52.385583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:18.380 [2024-12-12 10:39:52.393972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.380 [2024-12-12 10:39:52.393995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.380 [2024-12-12 10:39:52.394003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:18.640 [2024-12-12 10:39:52.401592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.640 [2024-12-12 10:39:52.401616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.640 [2024-12-12 10:39:52.401625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.640 [2024-12-12 10:39:52.409384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.640 [2024-12-12 10:39:52.409407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.640 [2024-12-12 10:39:52.409415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:18.640 [2024-12-12 10:39:52.416730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.640 [2024-12-12 10:39:52.416752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.640 [2024-12-12 10:39:52.416760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:18.640 [2024-12-12 10:39:52.423382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.640 [2024-12-12 10:39:52.423404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.640 [2024-12-12 10:39:52.423412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:18.640 [2024-12-12 10:39:52.430022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.640 [2024-12-12 10:39:52.430044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.640 [2024-12-12 10:39:52.430052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.640 [2024-12-12 10:39:52.436507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.640 [2024-12-12 10:39:52.436529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.640 [2024-12-12 10:39:52.436538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:18.640 [2024-12-12 10:39:52.442489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.640 [2024-12-12 10:39:52.442509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.640 [2024-12-12 10:39:52.442518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:18.640 [2024-12-12 10:39:52.448485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.640 [2024-12-12 10:39:52.448506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.640 [2024-12-12 10:39:52.448515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:18.640 [2024-12-12 10:39:52.454254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.640 [2024-12-12 10:39:52.454275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.640 [2024-12-12 10:39:52.454288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.640 [2024-12-12 10:39:52.460049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.640 [2024-12-12 10:39:52.460071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.640 [2024-12-12 10:39:52.460080] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:18.640 [2024-12-12 10:39:52.465723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.640 [2024-12-12 10:39:52.465745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.640 [2024-12-12 10:39:52.465753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:18.640 [2024-12-12 10:39:52.471312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.640 [2024-12-12 10:39:52.471334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.640 [2024-12-12 10:39:52.471343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:18.640 [2024-12-12 10:39:52.476949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.640 [2024-12-12 10:39:52.476970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.640 [2024-12-12 10:39:52.476978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.640 [2024-12-12 10:39:52.482499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.640 [2024-12-12 10:39:52.482521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.640 [2024-12-12 10:39:52.482529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:18.640 [2024-12-12 10:39:52.488284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.640 [2024-12-12 10:39:52.488306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.640 [2024-12-12 10:39:52.488314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:18.640 [2024-12-12 10:39:52.493862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.640 [2024-12-12 10:39:52.493884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.640 [2024-12-12 10:39:52.493892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:18.640 [2024-12-12 10:39:52.499339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.640 [2024-12-12 10:39:52.499361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.640 
[2024-12-12 10:39:52.499369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.641 [2024-12-12 10:39:52.504958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.641 [2024-12-12 10:39:52.504979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.641 [2024-12-12 10:39:52.504987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:18.641 [2024-12-12 10:39:52.510532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.641 [2024-12-12 10:39:52.510554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.641 [2024-12-12 10:39:52.510562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:18.641 [2024-12-12 10:39:52.516321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.641 [2024-12-12 10:39:52.516343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.641 [2024-12-12 10:39:52.516351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:18.641 [2024-12-12 10:39:52.522085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.641 [2024-12-12 10:39:52.522107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.641 [2024-12-12 10:39:52.522115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.641 [2024-12-12 10:39:52.527515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.641 [2024-12-12 10:39:52.527536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.641 [2024-12-12 10:39:52.527544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:18.641 [2024-12-12 10:39:52.533240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.641 [2024-12-12 10:39:52.533262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.641 [2024-12-12 10:39:52.533270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:18.641 [2024-12-12 10:39:52.538918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.641 [2024-12-12 10:39:52.538939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22144 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:18.641 [2024-12-12 10:39:52.538946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:18.641 [2024-12-12 10:39:52.544482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.641 [2024-12-12 10:39:52.544504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.641 [2024-12-12 10:39:52.544512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.641 [2024-12-12 10:39:52.550300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.641 [2024-12-12 10:39:52.550322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.641 [2024-12-12 10:39:52.550334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:18.641 [2024-12-12 10:39:52.556739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.641 [2024-12-12 10:39:52.556760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.641 [2024-12-12 10:39:52.556768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:18.641 [2024-12-12 10:39:52.563017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.641 [2024-12-12 10:39:52.563037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.641 [2024-12-12 10:39:52.563045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:18.641 [2024-12-12 10:39:52.569225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.641 [2024-12-12 10:39:52.569247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.641 [2024-12-12 10:39:52.569255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.641 [2024-12-12 10:39:52.575234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.641 [2024-12-12 10:39:52.575256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.641 [2024-12-12 10:39:52.575264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:18.641 [2024-12-12 10:39:52.581244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.641 [2024-12-12 10:39:52.581266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.641 [2024-12-12 10:39:52.581274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:18.641 [2024-12-12 10:39:52.587261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.641 [2024-12-12 10:39:52.587281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.641 [2024-12-12 10:39:52.587290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:18.641 [2024-12-12 10:39:52.593175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.641 [2024-12-12 10:39:52.593197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.641 [2024-12-12 10:39:52.593205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.641 [2024-12-12 10:39:52.599034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.641 [2024-12-12 10:39:52.599057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.641 [2024-12-12 10:39:52.599065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:18.641 [2024-12-12 10:39:52.604704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.641 [2024-12-12 10:39:52.604731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.641 [2024-12-12 10:39:52.604739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:18.641 [2024-12-12 10:39:52.610315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.641 [2024-12-12 10:39:52.610338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.641 [2024-12-12 10:39:52.610346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:18.641 [2024-12-12 10:39:52.615969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.641 [2024-12-12 10:39:52.615992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.641 [2024-12-12 10:39:52.616000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.641 [2024-12-12 10:39:52.622102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.641 [2024-12-12 10:39:52.622125] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.641 [2024-12-12 10:39:52.622133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:18.641 [2024-12-12 10:39:52.629476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.641 [2024-12-12 10:39:52.629500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.641 [2024-12-12 10:39:52.629508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:18.641 [2024-12-12 10:39:52.637455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.641 [2024-12-12 10:39:52.637478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.641 [2024-12-12 10:39:52.637486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:18.641 [2024-12-12 10:39:52.644803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.641 [2024-12-12 10:39:52.644827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.641 [2024-12-12 10:39:52.644836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.641 [2024-12-12 10:39:52.652409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.641 [2024-12-12 10:39:52.652432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.641 [2024-12-12 10:39:52.652441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:18.641 [2024-12-12 10:39:52.656873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.641 [2024-12-12 10:39:52.656895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.641 [2024-12-12 10:39:52.656904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:18.901 [2024-12-12 10:39:52.664320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.901 [2024-12-12 10:39:52.664342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.901 [2024-12-12 10:39:52.664351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:18.901 [2024-12-12 10:39:52.672852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.901 
[2024-12-12 10:39:52.672875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.901 [2024-12-12 10:39:52.672883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.901 [2024-12-12 10:39:52.681043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.901 [2024-12-12 10:39:52.681066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.901 [2024-12-12 10:39:52.681074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:18.901 [2024-12-12 10:39:52.688661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.901 [2024-12-12 10:39:52.688684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.901 [2024-12-12 10:39:52.688693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:18.901 [2024-12-12 10:39:52.695708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.901 [2024-12-12 10:39:52.695730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.901 [2024-12-12 10:39:52.695739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:18.901 [2024-12-12 10:39:52.703382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.901 [2024-12-12 10:39:52.703405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.901 [2024-12-12 10:39:52.703413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.901 [2024-12-12 10:39:52.710578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.901 [2024-12-12 10:39:52.710601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.901 [2024-12-12 10:39:52.710609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:18.901 [2024-12-12 10:39:52.717235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.901 [2024-12-12 10:39:52.717257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.901 [2024-12-12 10:39:52.717266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:18.901 [2024-12-12 10:39:52.723493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x705560) 00:26:18.901 [2024-12-12 10:39:52.723515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.901 [2024-12-12 10:39:52.723526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:18.901 [2024-12-12 10:39:52.729858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.901 [2024-12-12 10:39:52.729881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.902 [2024-12-12 10:39:52.729889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.902 [2024-12-12 10:39:52.737455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.902 [2024-12-12 10:39:52.737477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.902 [2024-12-12 10:39:52.737486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:18.902 [2024-12-12 10:39:52.745533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.902 [2024-12-12 10:39:52.745556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.902 [2024-12-12 10:39:52.745564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:18.902 [2024-12-12 10:39:52.751690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.902 [2024-12-12 10:39:52.751713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.902 [2024-12-12 10:39:52.751722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:18.902 [2024-12-12 10:39:52.758732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.902 [2024-12-12 10:39:52.758755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.902 [2024-12-12 10:39:52.758763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.902 [2024-12-12 10:39:52.765860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.902 [2024-12-12 10:39:52.765883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.902 [2024-12-12 10:39:52.765891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:18.902 [2024-12-12 10:39:52.773587] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.902 [2024-12-12 10:39:52.773609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.902 [2024-12-12 10:39:52.773617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:18.902 [2024-12-12 10:39:52.781209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.902 [2024-12-12 10:39:52.781231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.902 [2024-12-12 10:39:52.781240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:18.902 [2024-12-12 10:39:52.788079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.902 [2024-12-12 10:39:52.788106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.902 [2024-12-12 10:39:52.788114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.902 [2024-12-12 10:39:52.795505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.902 [2024-12-12 10:39:52.795528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.902 [2024-12-12 10:39:52.795536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:18.902 [2024-12-12 10:39:52.803909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.902 [2024-12-12 10:39:52.803932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.902 [2024-12-12 10:39:52.803941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:18.902 [2024-12-12 10:39:52.810953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.902 [2024-12-12 10:39:52.810975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.902 [2024-12-12 10:39:52.810983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:18.902 [2024-12-12 10:39:52.817904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.902 [2024-12-12 10:39:52.817926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.902 [2024-12-12 10:39:52.817934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:18.902 [2024-12-12 10:39:52.824589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.902 [2024-12-12 10:39:52.824611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.902 [2024-12-12 10:39:52.824619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:18.902 [2024-12-12 10:39:52.831033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.902 [2024-12-12 10:39:52.831054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.902 [2024-12-12 10:39:52.831063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:18.902 [2024-12-12 10:39:52.838717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.902 [2024-12-12 10:39:52.838739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.902 [2024-12-12 10:39:52.838747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:18.902 [2024-12-12 10:39:52.846257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.902 [2024-12-12 10:39:52.846280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.902 [2024-12-12 10:39:52.846288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.902 [2024-12-12 10:39:52.853591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.902 [2024-12-12 10:39:52.853613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.902 [2024-12-12 10:39:52.853622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:18.902 [2024-12-12 10:39:52.860966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.902 [2024-12-12 10:39:52.860988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.902 [2024-12-12 10:39:52.860996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:18.902 [2024-12-12 10:39:52.868278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.902 [2024-12-12 10:39:52.868299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.902 [2024-12-12 10:39:52.868308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:18.902 [2024-12-12 10:39:52.875747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.902 [2024-12-12 10:39:52.875768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.902 [2024-12-12 10:39:52.875777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.902 4980.00 IOPS, 622.50 MiB/s [2024-12-12T09:39:52.925Z] [2024-12-12 10:39:52.884425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.902 [2024-12-12 10:39:52.884448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.902 [2024-12-12 10:39:52.884456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:18.902 [2024-12-12 10:39:52.891845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.902 [2024-12-12 10:39:52.891867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.902 [2024-12-12 10:39:52.891876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:18.902 [2024-12-12 10:39:52.899322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.902 [2024-12-12 10:39:52.899345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.902 [2024-12-12 10:39:52.899353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:18.902 [2024-12-12 10:39:52.906464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.902 [2024-12-12 10:39:52.906487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.902 [2024-12-12 10:39:52.906496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.902 [2024-12-12 10:39:52.914153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.902 [2024-12-12 10:39:52.914175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.902 [2024-12-12 10:39:52.914190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:18.902 [2024-12-12 10:39:52.922115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:18.902 [2024-12-12 10:39:52.922137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.902 [2024-12-12 10:39:52.922145] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:19.162 [2024-12-12 10:39:52.930168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:19.162 [2024-12-12 10:39:52.930189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.162 [2024-12-12 10:39:52.930197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:19.162 [2024-12-12 10:39:52.938070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:19.162 [2024-12-12 10:39:52.938092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.162 [2024-12-12 10:39:52.938100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.162 [2024-12-12 10:39:52.945914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:19.162 [2024-12-12 10:39:52.945936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.162 [2024-12-12 10:39:52.945944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:19.162 [2024-12-12 10:39:52.952348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:19.162 [2024-12-12 10:39:52.952371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.162 [2024-12-12 10:39:52.952379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:19.162 [2024-12-12 10:39:52.958961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:19.162 [2024-12-12 10:39:52.958984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.162 [2024-12-12 10:39:52.958992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:19.163 [2024-12-12 10:39:52.964875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:19.163 [2024-12-12 10:39:52.964897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.163 [2024-12-12 10:39:52.964905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.163 [2024-12-12 10:39:52.970646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:19.163 [2024-12-12 10:39:52.970668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:19.163 [2024-12-12 10:39:52.970675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:26:19.163 [2024-12-12 10:39:52.976667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560)
00:26:19.163 [2024-12-12 10:39:52.976689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.163 [2024-12-12 10:39:52.976697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
[... repeated entries condensed: the same three-line sequence (nvme_tcp.c:1365 "data digest error on tqpair=(0x705560)", the affected READ command print, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) recurs roughly every 5-8 ms from 10:39:52.982 through 10:39:53.876, varying only in cid and lba; every completion carries sct 0x0 / sc 0x22 with dnr:0, i.e. retriable ...]
00:26:19.948 [2024-12-12 10:39:53.882106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560)
00:26:19.948 [2024-12-12 10:39:53.882126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.948 [2024-12-12 10:39:53.882134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:19.947 [2024-12-12 10:39:53.856145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:19.947 [2024-12-12 10:39:53.856170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.947 [2024-12-12 10:39:53.856178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.947 [2024-12-12 10:39:53.861331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:19.947 [2024-12-12 10:39:53.861353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.947 [2024-12-12 10:39:53.861361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:19.947 [2024-12-12 10:39:53.866528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:19.947 [2024-12-12 10:39:53.866549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.947 [2024-12-12 10:39:53.866557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:19.947 [2024-12-12 10:39:53.871702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:19.947 [2024-12-12 10:39:53.871722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.948 [2024-12-12 10:39:53.871730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:19.948 [2024-12-12 10:39:53.876897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:19.948 [2024-12-12 10:39:53.876918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.948 [2024-12-12 10:39:53.876926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.948 [2024-12-12 10:39:53.882106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x705560) 00:26:19.948 [2024-12-12 10:39:53.882126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.948 [2024-12-12 10:39:53.882134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:19.948 4913.00 IOPS, 614.12 MiB/s 00:26:19.948 Latency(us) 00:26:19.948 [2024-12-12T09:39:53.971Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:19.948 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:19.948 nvme0n1 : 2.00 4913.77 614.22 0.00 0.00 3253.67 624.15 9175.04 00:26:19.948 [2024-12-12T09:39:53.971Z] 
00:26:19.948 [2024-12-12T09:39:53.971Z] ===================================================================================================================
00:26:19.948 [2024-12-12T09:39:53.971Z] Total : 4913.77 614.22 0.00 0.00 3253.67 624.15 9175.04
00:26:19.948 {
00:26:19.948   "results": [
00:26:19.948     {
00:26:19.948       "job": "nvme0n1",
00:26:19.948       "core_mask": "0x2",
00:26:19.948       "workload": "randread",
00:26:19.948       "status": "finished",
00:26:19.948       "queue_depth": 16,
00:26:19.948       "io_size": 131072,
00:26:19.948       "runtime": 2.002944,
00:26:19.948       "iops": 4913.766935071575,
00:26:19.948       "mibps": 614.2208668839469,
00:26:19.948       "io_failed": 0,
00:26:19.948       "io_timeout": 0,
00:26:19.948       "avg_latency_us": 3253.6707452027754,
00:26:19.948       "min_latency_us": 624.152380952381,
00:26:19.948       "max_latency_us": 9175.04
00:26:19.948     }
00:26:19.948   ],
00:26:19.948   "core_count": 1
00:26:19.948 }
00:26:19.948 10:39:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:19.948 10:39:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:19.948 10:39:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:19.948 | .driver_specific
00:26:19.948 | .nvme_error
00:26:19.948 | .status_code
00:26:19.948 | .command_transient_transport_error'
00:26:19.948 10:39:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:20.207 10:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 317 > 0 ))
00:26:20.207 10:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1662952
00:26:20.207 10:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1662952 ']'
00:26:20.207 10:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1662952
00:26:20.207 10:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:26:20.207 10:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:20.207 10:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1662952
00:26:20.207 10:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:26:20.207 10:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:26:20.207 10:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1662952'
00:26:20.207 killing process with pid 1662952
00:26:20.207 10:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1662952
00:26:20.207 Received shutdown signal, test time was about 2.000000 seconds
00:26:20.207
00:26:20.207 Latency(us)
00:26:20.207 [2024-12-12T09:39:54.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:20.207 [2024-12-12T09:39:54.230Z] ===================================================================================================================
00:26:20.207 [2024-12-12T09:39:54.230Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
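For reference, the traced `get_transient_errcount` check above amounts to the following bash sketch. Only the RPC and jq invocations are taken from the trace; the helper bodies are reconstructions, not the verbatim digest.sh script.

    #!/usr/bin/env bash
    # Sketch of the transient-error check seen in the xtrace above.
    # The path and socket are as shown in the trace; function bodies are
    # assumptions reconstructed from the traced commands.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    bperf_rpc() {
        # bdevperf listens on its own RPC socket (started with -r /var/tmp/bperf.sock)
        "$rpc_py" -s /var/tmp/bperf.sock "$@"
    }

    get_transient_errcount() {
        # With --nvme-error-stat enabled, bdev_get_iostat reports per-status-code
        # NVMe error counters under driver_specific.nvme_error
        bperf_rpc bdev_get_iostat -b "$1" | jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
    }

    # The sub-test passes only if injected digest errors surfaced as
    # COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions; this run
    # counted 317 of them.
    (( $(get_transient_errcount nvme0n1) > 0 ))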
00:26:20.207 10:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1662952
00:26:20.466 10:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
10:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
10:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
10:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
10:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
10:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1663472
10:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1663472 /var/tmp/bperf.sock
10:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
10:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1663472 ']'
10:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
10:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
10:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
10:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
10:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:20.466 [2024-12-12 10:39:54.363402] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization...
00:26:20.466 [2024-12-12 10:39:54.363448] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1663472 ]
00:26:20.466 [2024-12-12 10:39:54.436894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:20.466 [2024-12-12 10:39:54.475248] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:26:20.725 10:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:20.725 10:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:26:20.725 10:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:20.725 10:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:20.984 10:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:20.984 10:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:20.984 10:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:20.984 10:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:20.984 10:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:20.984 10:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:21.246 nvme0n1
00:26:21.246 10:39:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
10:39:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
10:39:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
10:39:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
10:39:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
10:39:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:21.246 Running I/O for 2 seconds...
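Condensed, the setup sequence traced above comes down to the bash sketch below. It reuses the rpc_py/bperf_rpc helpers from the earlier sketch; treating rpc_cmd as an alias for the target-side RPC socket is an assumption based on how the helper is used in the trace, and the exact semantics of -i 256 (count vs. interval) are not shown in the log.

    # Enable per-command NVMe error counters on the initiator and retry
    # forever, so digest failures are counted rather than fatal.
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Assumption: rpc_cmd (no -s flag) addresses the nvmf target app on
    # its default RPC socket.
    rpc_cmd() { "$rpc_py" "$@"; }

    # Start from a clean slate, attach the NVMe-oF TCP controller with
    # data digest enabled (--ddgst), then arm the accel fault injector so
    # crc32c results are corrupted and data digest verification fails.
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256

    # Drive the workload configured on the bdevperf command line
    # (-w randwrite -o 4096 -q 128 -t 2).
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests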
00:26:21.246 [2024-12-12 10:39:55.155677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90
00:26:21.246 [2024-12-12 10:39:55.155820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.246 [2024-12-12 10:39:55.155846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:26:21.246 [2024-12-12 10:39:55.165074] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90
00:26:21.246 [2024-12-12 10:39:55.165202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.246 [2024-12-12 10:39:55.165223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:26:21.246 [... the same Data digest error / WRITE / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triplet repeats roughly every 9-10 ms from 10:39:55.174438 through 10:39:56.042551, cid cycling 107-112 with varying lba values ...]
00:26:22.289 [2024-12-12 10:39:56.051993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90
00:26:22.289 [2024-12-12 10:39:56.052118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.289 [2024-12-12 10:39:56.052137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT
TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.289 [2024-12-12 10:39:56.061428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.289 [2024-12-12 10:39:56.061550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.289 [2024-12-12 10:39:56.061575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.289 [2024-12-12 10:39:56.070724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.289 [2024-12-12 10:39:56.070845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.289 [2024-12-12 10:39:56.070863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.289 [2024-12-12 10:39:56.080014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.289 [2024-12-12 10:39:56.080138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.289 [2024-12-12 10:39:56.080156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.289 [2024-12-12 10:39:56.089312] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.289 [2024-12-12 10:39:56.089437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.289 [2024-12-12 10:39:56.089456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.289 [2024-12-12 10:39:56.098604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.289 [2024-12-12 10:39:56.098728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.289 [2024-12-12 10:39:56.098745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.289 [2024-12-12 10:39:56.107899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.289 [2024-12-12 10:39:56.108020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.289 [2024-12-12 10:39:56.108038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.289 [2024-12-12 10:39:56.117188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.289 [2024-12-12 10:39:56.117311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.289 [2024-12-12 10:39:56.117328] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.289 [2024-12-12 10:39:56.126473] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.289 [2024-12-12 10:39:56.126603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.289 [2024-12-12 10:39:56.126621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.289 [2024-12-12 10:39:56.135770] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.289 [2024-12-12 10:39:56.135891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.289 [2024-12-12 10:39:56.135908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.289 [2024-12-12 10:39:56.145064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.289 [2024-12-12 10:39:56.145192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.289 [2024-12-12 10:39:56.145210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.289 27303.00 IOPS, 106.65 MiB/s [2024-12-12T09:39:56.312Z] [2024-12-12 10:39:56.154371] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.289 [2024-12-12 10:39:56.154493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.289 [2024-12-12 10:39:56.154511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.289 [2024-12-12 10:39:56.163694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.289 [2024-12-12 10:39:56.163817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.289 [2024-12-12 10:39:56.163835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.289 [2024-12-12 10:39:56.173222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.289 [2024-12-12 10:39:56.173347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.289 [2024-12-12 10:39:56.173368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.289 [2024-12-12 10:39:56.182881] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.289 [2024-12-12 10:39:56.183006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16562 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.289 [2024-12-12 10:39:56.183025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.289 [2024-12-12 10:39:56.192179] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.289 [2024-12-12 10:39:56.192302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.289 [2024-12-12 10:39:56.192320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.289 [2024-12-12 10:39:56.201481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.289 [2024-12-12 10:39:56.201611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.289 [2024-12-12 10:39:56.201629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.289 [2024-12-12 10:39:56.210773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.289 [2024-12-12 10:39:56.210895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.289 [2024-12-12 10:39:56.210912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.289 [2024-12-12 10:39:56.220068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.289 [2024-12-12 10:39:56.220192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:9585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.289 [2024-12-12 10:39:56.220213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.289 [2024-12-12 10:39:56.229361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.289 [2024-12-12 10:39:56.229483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.289 [2024-12-12 10:39:56.229501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.289 [2024-12-12 10:39:56.238664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.289 [2024-12-12 10:39:56.238788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.289 [2024-12-12 10:39:56.238805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.289 [2024-12-12 10:39:56.247938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.289 [2024-12-12 10:39:56.248061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:110 nsid:1 lba:14418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.289 [2024-12-12 10:39:56.248079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.289 [2024-12-12 10:39:56.257257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.289 [2024-12-12 10:39:56.257377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.289 [2024-12-12 10:39:56.257395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.289 [2024-12-12 10:39:56.266532] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.289 [2024-12-12 10:39:56.266661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.289 [2024-12-12 10:39:56.266678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.289 [2024-12-12 10:39:56.275849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.289 [2024-12-12 10:39:56.275971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:9531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.289 [2024-12-12 10:39:56.275989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.289 [2024-12-12 10:39:56.285134] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.289 [2024-12-12 10:39:56.285256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.289 [2024-12-12 10:39:56.285275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.289 [2024-12-12 10:39:56.294450] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.290 [2024-12-12 10:39:56.294578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:19443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.290 [2024-12-12 10:39:56.294597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.290 [2024-12-12 10:39:56.303731] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.290 [2024-12-12 10:39:56.303857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.290 [2024-12-12 10:39:56.303875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.549 [2024-12-12 10:39:56.313306] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.549 [2024-12-12 10:39:56.313428] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.549 [2024-12-12 10:39:56.313446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.549 [2024-12-12 10:39:56.322737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.549 [2024-12-12 10:39:56.322862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.549 [2024-12-12 10:39:56.322880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.549 [2024-12-12 10:39:56.332180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.549 [2024-12-12 10:39:56.332307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.549 [2024-12-12 10:39:56.332326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.549 [2024-12-12 10:39:56.341643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.549 [2024-12-12 10:39:56.341768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.549 [2024-12-12 10:39:56.341788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.549 [2024-12-12 10:39:56.350953] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.549 [2024-12-12 10:39:56.351076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.549 [2024-12-12 10:39:56.351094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.549 [2024-12-12 10:39:56.360399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.549 [2024-12-12 10:39:56.360525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.549 [2024-12-12 10:39:56.360544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.549 [2024-12-12 10:39:56.369779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.549 [2024-12-12 10:39:56.369904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.549 [2024-12-12 10:39:56.369921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.549 [2024-12-12 10:39:56.379188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.549 [2024-12-12 
10:39:56.379311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.549 [2024-12-12 10:39:56.379329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.549 [2024-12-12 10:39:56.388480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.549 [2024-12-12 10:39:56.388612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.549 [2024-12-12 10:39:56.388632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.549 [2024-12-12 10:39:56.397798] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.549 [2024-12-12 10:39:56.397922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.549 [2024-12-12 10:39:56.397939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.549 [2024-12-12 10:39:56.407099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.549 [2024-12-12 10:39:56.407223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.549 [2024-12-12 10:39:56.407240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.549 [2024-12-12 10:39:56.416400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.549 [2024-12-12 10:39:56.416526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.549 [2024-12-12 10:39:56.416543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.549 [2024-12-12 10:39:56.425961] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.549 [2024-12-12 10:39:56.426088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.549 [2024-12-12 10:39:56.426106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.549 [2024-12-12 10:39:56.435295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.549 [2024-12-12 10:39:56.435419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.549 [2024-12-12 10:39:56.435437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.549 [2024-12-12 10:39:56.444674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with 
pdu=0x200016efef90 00:26:22.549 [2024-12-12 10:39:56.444797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.549 [2024-12-12 10:39:56.444814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.549 [2024-12-12 10:39:56.453979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.549 [2024-12-12 10:39:56.454103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.549 [2024-12-12 10:39:56.454120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.549 [2024-12-12 10:39:56.463303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.549 [2024-12-12 10:39:56.463429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.549 [2024-12-12 10:39:56.463449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.549 [2024-12-12 10:39:56.472623] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.549 [2024-12-12 10:39:56.472749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.549 [2024-12-12 10:39:56.472766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.549 [2024-12-12 10:39:56.481938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.549 [2024-12-12 10:39:56.482062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.549 [2024-12-12 10:39:56.482079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.549 [2024-12-12 10:39:56.491241] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.549 [2024-12-12 10:39:56.491365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.549 [2024-12-12 10:39:56.491383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.549 [2024-12-12 10:39:56.500556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.549 [2024-12-12 10:39:56.500687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.549 [2024-12-12 10:39:56.500705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.549 [2024-12-12 10:39:56.509878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.549 [2024-12-12 10:39:56.510002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.549 [2024-12-12 10:39:56.510020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.550 [2024-12-12 10:39:56.519240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.550 [2024-12-12 10:39:56.519363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.550 [2024-12-12 10:39:56.519383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.550 [2024-12-12 10:39:56.528545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.550 [2024-12-12 10:39:56.528677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.550 [2024-12-12 10:39:56.528695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.550 [2024-12-12 10:39:56.537854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.550 [2024-12-12 10:39:56.537979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.550 [2024-12-12 10:39:56.537997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.550 [2024-12-12 10:39:56.547154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.550 [2024-12-12 10:39:56.547283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.550 [2024-12-12 10:39:56.547301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.550 [2024-12-12 10:39:56.556468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.550 [2024-12-12 10:39:56.556595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.550 [2024-12-12 10:39:56.556613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.550 [2024-12-12 10:39:56.565779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.550 [2024-12-12 10:39:56.565904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.550 [2024-12-12 10:39:56.565921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.809 [2024-12-12 10:39:56.575347] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.809 [2024-12-12 10:39:56.575471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.809 [2024-12-12 10:39:56.575489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.809 [2024-12-12 10:39:56.584705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.809 [2024-12-12 10:39:56.584829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.809 [2024-12-12 10:39:56.584846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.809 [2024-12-12 10:39:56.593992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.809 [2024-12-12 10:39:56.594115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.809 [2024-12-12 10:39:56.594133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.809 [2024-12-12 10:39:56.603293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.809 [2024-12-12 10:39:56.603416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.809 [2024-12-12 10:39:56.603435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.809 [2024-12-12 10:39:56.612604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.809 [2024-12-12 10:39:56.612727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.809 [2024-12-12 10:39:56.612745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.809 [2024-12-12 10:39:56.621899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.809 [2024-12-12 10:39:56.622020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.809 [2024-12-12 10:39:56.622038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.809 [2024-12-12 10:39:56.631196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.809 [2024-12-12 10:39:56.631321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.809 [2024-12-12 10:39:56.631340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 
00:26:22.809 [2024-12-12 10:39:56.640507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.809 [2024-12-12 10:39:56.640637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.809 [2024-12-12 10:39:56.640655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.809 [2024-12-12 10:39:56.649810] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.809 [2024-12-12 10:39:56.649932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.809 [2024-12-12 10:39:56.649950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.809 [2024-12-12 10:39:56.659153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.809 [2024-12-12 10:39:56.659276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.809 [2024-12-12 10:39:56.659294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.809 [2024-12-12 10:39:56.668430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.809 [2024-12-12 10:39:56.668555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.809 [2024-12-12 10:39:56.668579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.809 [2024-12-12 10:39:56.677992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.809 [2024-12-12 10:39:56.678119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.809 [2024-12-12 10:39:56.678137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.809 [2024-12-12 10:39:56.687320] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.809 [2024-12-12 10:39:56.687444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.809 [2024-12-12 10:39:56.687463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.809 [2024-12-12 10:39:56.696625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.809 [2024-12-12 10:39:56.696750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.809 [2024-12-12 10:39:56.696769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.809 [2024-12-12 10:39:56.705922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.809 [2024-12-12 10:39:56.706046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.809 [2024-12-12 10:39:56.706066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.809 [2024-12-12 10:39:56.715233] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.809 [2024-12-12 10:39:56.715355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:10069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.809 [2024-12-12 10:39:56.715373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.809 [2024-12-12 10:39:56.724527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.809 [2024-12-12 10:39:56.724657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.809 [2024-12-12 10:39:56.724676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.809 [2024-12-12 10:39:56.733851] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.810 [2024-12-12 10:39:56.733975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.810 [2024-12-12 10:39:56.733993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.810 [2024-12-12 10:39:56.743289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.810 [2024-12-12 10:39:56.743415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.810 [2024-12-12 10:39:56.743433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.810 [2024-12-12 10:39:56.752635] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.810 [2024-12-12 10:39:56.752759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.810 [2024-12-12 10:39:56.752777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.810 [2024-12-12 10:39:56.761926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.810 [2024-12-12 10:39:56.762048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.810 [2024-12-12 10:39:56.762065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.810 [2024-12-12 10:39:56.771239] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.810 [2024-12-12 10:39:56.771363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.810 [2024-12-12 10:39:56.771380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.810 [2024-12-12 10:39:56.780539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.810 [2024-12-12 10:39:56.780668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.810 [2024-12-12 10:39:56.780686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.810 [2024-12-12 10:39:56.789840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.810 [2024-12-12 10:39:56.789966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.810 [2024-12-12 10:39:56.789983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.810 [2024-12-12 10:39:56.799129] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.810 [2024-12-12 10:39:56.799251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.810 [2024-12-12 10:39:56.799268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.810 [2024-12-12 10:39:56.808447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.810 [2024-12-12 10:39:56.808575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.810 [2024-12-12 10:39:56.808593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.810 [2024-12-12 10:39:56.817743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.810 [2024-12-12 10:39:56.817868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.810 [2024-12-12 10:39:56.817885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.810 [2024-12-12 10:39:56.827050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:22.810 [2024-12-12 10:39:56.827177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.810 [2024-12-12 10:39:56.827195] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:23.070 [2024-12-12 10:39:56.836608] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:23.070 [2024-12-12 10:39:56.836734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.070 [2024-12-12 10:39:56.836752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:23.070 [2024-12-12 10:39:56.845987] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:23.070 [2024-12-12 10:39:56.846111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.070 [2024-12-12 10:39:56.846128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:23.070 [2024-12-12 10:39:56.855270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:23.070 [2024-12-12 10:39:56.855392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.070 [2024-12-12 10:39:56.855410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:23.070 [2024-12-12 10:39:56.864560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:23.070 [2024-12-12 10:39:56.864690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.070 [2024-12-12 10:39:56.864708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:23.070 [2024-12-12 10:39:56.873839] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:23.070 [2024-12-12 10:39:56.873962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.070 [2024-12-12 10:39:56.873980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:23.070 [2024-12-12 10:39:56.883151] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:23.070 [2024-12-12 10:39:56.883275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.070 [2024-12-12 10:39:56.883292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:23.070 [2024-12-12 10:39:56.892439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:23.070 [2024-12-12 10:39:56.892562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.070 
[2024-12-12 10:39:56.892585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:23.070 [2024-12-12 10:39:56.901744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:23.070 [2024-12-12 10:39:56.901868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.070 [2024-12-12 10:39:56.901885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:23.070 [2024-12-12 10:39:56.911275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:23.070 [2024-12-12 10:39:56.911398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.070 [2024-12-12 10:39:56.911416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:23.070 [2024-12-12 10:39:56.920560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:23.070 [2024-12-12 10:39:56.920690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.070 [2024-12-12 10:39:56.920708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:23.070 [2024-12-12 10:39:56.930104] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:23.070 [2024-12-12 10:39:56.930230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.070 [2024-12-12 10:39:56.930248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:23.070 [2024-12-12 10:39:56.939416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:23.070 [2024-12-12 10:39:56.939541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:25067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.070 [2024-12-12 10:39:56.939559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:23.070 [2024-12-12 10:39:56.948848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:23.070 [2024-12-12 10:39:56.948969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.070 [2024-12-12 10:39:56.948992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:23.070 [2024-12-12 10:39:56.958148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:23.070 [2024-12-12 10:39:56.958273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7627 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:23.070 [2024-12-12 10:39:56.958290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:23.070 [2024-12-12 10:39:56.967493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:23.070 [2024-12-12 10:39:56.967621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.070 [2024-12-12 10:39:56.967640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:23.070 [2024-12-12 10:39:56.976788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:23.070 [2024-12-12 10:39:56.976911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.070 [2024-12-12 10:39:56.976929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:23.070 [2024-12-12 10:39:56.986072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:23.070 [2024-12-12 10:39:56.986196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.070 [2024-12-12 10:39:56.986212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:23.070 [2024-12-12 10:39:56.995360] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:23.070 [2024-12-12 10:39:56.995482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.070 [2024-12-12 10:39:56.995500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:23.070 [2024-12-12 10:39:57.004655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:23.070 [2024-12-12 10:39:57.004779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.070 [2024-12-12 10:39:57.004796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:23.070 [2024-12-12 10:39:57.013976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:23.070 [2024-12-12 10:39:57.014102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.070 [2024-12-12 10:39:57.014119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:23.070 [2024-12-12 10:39:57.023296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:23.070 [2024-12-12 10:39:57.023419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 
lba:24465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.070 [2024-12-12 10:39:57.023437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:23.070 [2024-12-12 10:39:57.032667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:23.070 [2024-12-12 10:39:57.032794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.070 [2024-12-12 10:39:57.032812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:23.070 [2024-12-12 10:39:57.041950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:23.070 [2024-12-12 10:39:57.042073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.070 [2024-12-12 10:39:57.042091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:23.070 [2024-12-12 10:39:57.051236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:23.070 [2024-12-12 10:39:57.051358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.070 [2024-12-12 10:39:57.051375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:23.071 [2024-12-12 10:39:57.060527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:23.071 [2024-12-12 10:39:57.060655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:9574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.071 [2024-12-12 10:39:57.060672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:23.071 [2024-12-12 10:39:57.069812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:23.071 [2024-12-12 10:39:57.069935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.071 [2024-12-12 10:39:57.069953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:23.071 [2024-12-12 10:39:57.079122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:23.071 [2024-12-12 10:39:57.079245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.071 [2024-12-12 10:39:57.079263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:23.071 [2024-12-12 10:39:57.088425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:23.071 [2024-12-12 10:39:57.088554] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.071 [2024-12-12 10:39:57.088577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:23.330 [2024-12-12 10:39:57.098011] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:23.330 [2024-12-12 10:39:57.098134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.330 [2024-12-12 10:39:57.098151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:23.330 [2024-12-12 10:39:57.107281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:23.330 [2024-12-12 10:39:57.107405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.330 [2024-12-12 10:39:57.107422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:23.330 [2024-12-12 10:39:57.116594] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:23.330 [2024-12-12 10:39:57.116718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.330 [2024-12-12 10:39:57.116736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:23.330 [2024-12-12 10:39:57.125860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:23.330 [2024-12-12 10:39:57.125984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.330 [2024-12-12 10:39:57.126002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:23.330 [2024-12-12 10:39:57.135170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:23.330 [2024-12-12 10:39:57.135292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.330 [2024-12-12 10:39:57.135310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:23.330 [2024-12-12 10:39:57.144463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 00:26:23.330 [2024-12-12 10:39:57.144587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.330 [2024-12-12 10:39:57.144605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:23.330 27338.00 IOPS, 106.79 MiB/s [2024-12-12T09:39:57.353Z] [2024-12-12 10:39:57.153747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2020410) with pdu=0x200016efef90 
00:26:23.330 [2024-12-12 10:39:57.153870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.330 [2024-12-12 10:39:57.153888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:23.330 00:26:23.330 Latency(us) 00:26:23.330 [2024-12-12T09:39:57.353Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:23.330 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:23.330 nvme0n1 : 2.01 27338.64 106.79 0.00 0.00 4674.19 3214.38 10423.34 00:26:23.330 [2024-12-12T09:39:57.353Z] =================================================================================================================== 00:26:23.330 [2024-12-12T09:39:57.353Z] Total : 27338.64 106.79 0.00 0.00 4674.19 3214.38 10423.34 00:26:23.330 { 00:26:23.330 "results": [ 00:26:23.330 { 00:26:23.330 "job": "nvme0n1", 00:26:23.330 "core_mask": "0x2", 00:26:23.330 "workload": "randwrite", 00:26:23.330 "status": "finished", 00:26:23.330 "queue_depth": 128, 00:26:23.330 "io_size": 4096, 00:26:23.330 "runtime": 2.005806, 00:26:23.330 "iops": 27338.635939866566, 00:26:23.330 "mibps": 106.79154664010377, 00:26:23.330 "io_failed": 0, 00:26:23.330 "io_timeout": 0, 00:26:23.330 "avg_latency_us": 4674.189065924715, 00:26:23.330 "min_latency_us": 3214.384761904762, 00:26:23.330 "max_latency_us": 10423.344761904762 00:26:23.330 } 00:26:23.330 ], 00:26:23.330 "core_count": 1 00:26:23.330 } 00:26:23.330 10:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:23.330 10:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:23.330 10:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:23.330 | .driver_specific 00:26:23.330 | .nvme_error 00:26:23.330 | .status_code 00:26:23.330 | .command_transient_transport_error' 00:26:23.330 10:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:23.590 10:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 215 > 0 )) 00:26:23.590 10:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1663472 00:26:23.590 10:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1663472 ']' 00:26:23.590 10:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1663472 00:26:23.590 10:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:23.590 10:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:23.590 10:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1663472 00:26:23.590 10:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:23.590 10:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:23.590 10:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1663472' 00:26:23.590 killing process with pid 1663472 00:26:23.590 10:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1663472 00:26:23.590 Received shutdown signal, test time was about 2.000000 seconds 00:26:23.590 00:26:23.590 Latency(us) 00:26:23.590 [2024-12-12T09:39:57.613Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:23.590 [2024-12-12T09:39:57.613Z] =================================================================================================================== 00:26:23.590 [2024-12-12T09:39:57.613Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:23.590 10:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1663472 00:26:23.590 10:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:26:23.590 10:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:23.590 10:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:23.590 10:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:23.590 10:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:23.590 10:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1663934 00:26:23.590 10:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1663934 /var/tmp/bperf.sock 00:26:23.590 10:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:26:23.590 10:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1663934 ']' 00:26:23.590 10:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:23.590 10:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:23.590 10:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:23.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:23.590 10:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:23.590 10:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:23.849 [2024-12-12 10:39:57.624729] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:26:23.849 [2024-12-12 10:39:57.624778] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1663934 ] 00:26:23.849 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:23.849 Zero copy mechanism will not be used. 
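The `(( 215 > 0 ))` check traced above is the pass condition for the run that just finished: with `--nvme-error-stat` enabled, `bdev_get_iostat` exposes per-status-code NVMe error counters, and the harness requires at least one COMMAND TRANSIENT TRANSPORT ERROR to have been counted before it kills that bdevperf instance and relaunches with 128 KiB writes at queue depth 16. A minimal standalone sketch of that check, using the socket path, bdev name, and jq filter exactly as they appear in the trace:

#!/usr/bin/env bash
# Read the transient-transport-error counter maintained by --nvme-error-stat.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

# At least one injected digest error must have been counted (215 here).
(( errcount > 0 ))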
00:26:23.849 [2024-12-12 10:39:57.700881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:23.849 [2024-12-12 10:39:57.736908] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:23.849 10:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:23.849 10:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:23.849 10:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:23.849 10:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:24.108 10:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:24.108 10:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.108 10:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:24.108 10:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.108 10:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:24.108 10:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:24.676 nvme0n1 00:26:24.676 10:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:24.676 10:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.676 10:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:24.676 10:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.676 10:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:24.676 10:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:24.676 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:24.676 Zero copy mechanism will not be used. 00:26:24.676 Running I/O for 2 seconds... 
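The traces above show the full error-injection setup for this pass: the host-side bdev layer is told to keep per-status-code NVMe error statistics and retry indefinitely, the controller is attached with TCP data digest (`--ddgst`) enabled, and the target's crc32c accel operation is flipped from `disable` to `corrupt` with an interval of 32, so roughly one in every 32 digest calculations is corrupted in flight. A condensed sketch of the same sequence; the RPC calls are the ones traced above, and the one assumption not visible in the trace is that `rpc_cmd` reaches the target application on its default RPC socket while `bperf_rpc` reaches bdevperf on /var/tmp/bperf.sock:

#!/usr/bin/env bash
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Host (bdevperf) side, via the bperf socket: count NVMe errors per status
# code and retry forever, so injected errors stay transient rather than fatal.
"$rpc" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the target with TCP data digest enabled so payload CRCs are checked.
"$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Target side (default RPC socket assumed here): corrupt every 32nd crc32c
# calculation so the host observes data digest errors on the wire.
"$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32

# Kick off the timed run through bdevperf's RPC helper.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests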
00:26:24.676 [2024-12-12 10:39:58.555130] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.676 [2024-12-12 10:39:58.555196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.676 [2024-12-12 10:39:58.555226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:24.676 [2024-12-12 10:39:58.560283] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.676 [2024-12-12 10:39:58.560389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.676 [2024-12-12 10:39:58.560413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:24.676 [2024-12-12 10:39:58.566420] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.676 [2024-12-12 10:39:58.566560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.676 [2024-12-12 10:39:58.566592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:24.676 [2024-12-12 10:39:58.572725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.676 [2024-12-12 10:39:58.572826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.676 [2024-12-12 10:39:58.572846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:24.676 [2024-12-12 10:39:58.578396] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.676 [2024-12-12 10:39:58.578488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.676 [2024-12-12 10:39:58.578507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:24.676 [2024-12-12 10:39:58.582866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.676 [2024-12-12 10:39:58.582922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.676 [2024-12-12 10:39:58.582940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:24.676 [2024-12-12 10:39:58.587183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.676 [2024-12-12 10:39:58.587252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.676 [2024-12-12 10:39:58.587271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 
cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:24.676 [2024-12-12 10:39:58.591650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.676 [2024-12-12 10:39:58.591718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.676 [2024-12-12 10:39:58.591737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:24.676 [2024-12-12 10:39:58.596360] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.676 [2024-12-12 10:39:58.596416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.676 [2024-12-12 10:39:58.596434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:24.676 [2024-12-12 10:39:58.601432] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.676 [2024-12-12 10:39:58.601495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.676 [2024-12-12 10:39:58.601513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:24.676 [2024-12-12 10:39:58.606391] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.676 [2024-12-12 10:39:58.606454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.676 [2024-12-12 10:39:58.606472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:24.676 [2024-12-12 10:39:58.611170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.676 [2024-12-12 10:39:58.611240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.676 [2024-12-12 10:39:58.611258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:24.676 [2024-12-12 10:39:58.615910] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.676 [2024-12-12 10:39:58.615965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.676 [2024-12-12 10:39:58.615983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:24.676 [2024-12-12 10:39:58.621026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.676 [2024-12-12 10:39:58.621093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.676 [2024-12-12 10:39:58.621111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:24.676 [2024-12-12 10:39:58.626538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.676 [2024-12-12 10:39:58.626619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.676 [2024-12-12 10:39:58.626637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:24.676 [2024-12-12 10:39:58.631283] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.676 [2024-12-12 10:39:58.631371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.676 [2024-12-12 10:39:58.631390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:24.676 [2024-12-12 10:39:58.636111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.676 [2024-12-12 10:39:58.636175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.676 [2024-12-12 10:39:58.636193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:24.676 [2024-12-12 10:39:58.640902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.676 [2024-12-12 10:39:58.640970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.676 [2024-12-12 10:39:58.640988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:24.676 [2024-12-12 10:39:58.645642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.676 [2024-12-12 10:39:58.645700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.676 [2024-12-12 10:39:58.645718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:24.676 [2024-12-12 10:39:58.650058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.676 [2024-12-12 10:39:58.650123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.676 [2024-12-12 10:39:58.650147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:24.676 [2024-12-12 10:39:58.654803] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.676 [2024-12-12 10:39:58.654857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.676 [2024-12-12 10:39:58.654874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:24.676 [2024-12-12 10:39:58.659329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.676 [2024-12-12 10:39:58.659381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.676 [2024-12-12 10:39:58.659398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:24.676 [2024-12-12 10:39:58.664608] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.676 [2024-12-12 10:39:58.664674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.676 [2024-12-12 10:39:58.664692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:24.676 [2024-12-12 10:39:58.669848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.676 [2024-12-12 10:39:58.669906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.676 [2024-12-12 10:39:58.669924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:24.677 [2024-12-12 10:39:58.674383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.677 [2024-12-12 10:39:58.674448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.677 [2024-12-12 10:39:58.674466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:24.677 [2024-12-12 10:39:58.678761] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.677 [2024-12-12 10:39:58.678850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.677 [2024-12-12 10:39:58.678868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:24.677 [2024-12-12 10:39:58.683329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.677 [2024-12-12 10:39:58.683382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.677 [2024-12-12 10:39:58.683399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:24.677 [2024-12-12 10:39:58.688240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.677 [2024-12-12 10:39:58.688307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.677 [2024-12-12 10:39:58.688326] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:24.677 [2024-12-12 10:39:58.693105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.677 [2024-12-12 10:39:58.693162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.677 [2024-12-12 10:39:58.693183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:24.677 [2024-12-12 10:39:58.697831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.677 [2024-12-12 10:39:58.697895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.677 [2024-12-12 10:39:58.697914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:24.938 [2024-12-12 10:39:58.702482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.938 [2024-12-12 10:39:58.702556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.938 [2024-12-12 10:39:58.702586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:24.938 [2024-12-12 10:39:58.706878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.938 [2024-12-12 10:39:58.706944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.938 [2024-12-12 10:39:58.706962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:24.938 [2024-12-12 10:39:58.711026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.938 [2024-12-12 10:39:58.711136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.938 [2024-12-12 10:39:58.711155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:24.938 [2024-12-12 10:39:58.715273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.938 [2024-12-12 10:39:58.715330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.938 [2024-12-12 10:39:58.715348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:24.938 [2024-12-12 10:39:58.719651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.938 [2024-12-12 10:39:58.719715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.938 [2024-12-12 
10:39:58.719734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:24.938 [2024-12-12 10:39:58.723872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.938 [2024-12-12 10:39:58.723933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.938 [2024-12-12 10:39:58.723951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:24.938 [2024-12-12 10:39:58.728067] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.938 [2024-12-12 10:39:58.728126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.938 [2024-12-12 10:39:58.728143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:24.938 [2024-12-12 10:39:58.732254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.938 [2024-12-12 10:39:58.732313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.938 [2024-12-12 10:39:58.732331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:24.938 [2024-12-12 10:39:58.736594] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.938 [2024-12-12 10:39:58.736657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.938 [2024-12-12 10:39:58.736675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:24.938 [2024-12-12 10:39:58.740727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.938 [2024-12-12 10:39:58.740781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.938 [2024-12-12 10:39:58.740799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:24.938 [2024-12-12 10:39:58.744905] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.938 [2024-12-12 10:39:58.744964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.938 [2024-12-12 10:39:58.744983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:24.938 [2024-12-12 10:39:58.749031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.938 [2024-12-12 10:39:58.749086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:24.938 [2024-12-12 10:39:58.749103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:24.938 [2024-12-12 10:39:58.753235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.938 [2024-12-12 10:39:58.753291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.938 [2024-12-12 10:39:58.753308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:24.938 [2024-12-12 10:39:58.757426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.938 [2024-12-12 10:39:58.757485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.938 [2024-12-12 10:39:58.757503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:24.938 [2024-12-12 10:39:58.761557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.938 [2024-12-12 10:39:58.761621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.938 [2024-12-12 10:39:58.761639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:24.938 [2024-12-12 10:39:58.765712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.938 [2024-12-12 10:39:58.765762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.938 [2024-12-12 10:39:58.765784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:24.938 [2024-12-12 10:39:58.769850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.938 [2024-12-12 10:39:58.769904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.938 [2024-12-12 10:39:58.769922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:24.938 [2024-12-12 10:39:58.774013] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.938 [2024-12-12 10:39:58.774069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.938 [2024-12-12 10:39:58.774087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:24.938 [2024-12-12 10:39:58.778131] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.938 [2024-12-12 10:39:58.778188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19008 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:24.938 [2024-12-12 10:39:58.778205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:24.938 [2024-12-12 10:39:58.782371] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.938 [2024-12-12 10:39:58.782436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.938 [2024-12-12 10:39:58.782454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:24.938 [2024-12-12 10:39:58.786530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.938 [2024-12-12 10:39:58.786609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.938 [2024-12-12 10:39:58.786627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:24.938 [2024-12-12 10:39:58.790666] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.938 [2024-12-12 10:39:58.790730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.938 [2024-12-12 10:39:58.790748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:24.938 [2024-12-12 10:39:58.794789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.938 [2024-12-12 10:39:58.794856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.938 [2024-12-12 10:39:58.794873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:24.938 [2024-12-12 10:39:58.799011] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.938 [2024-12-12 10:39:58.799080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.938 [2024-12-12 10:39:58.799098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:24.938 [2024-12-12 10:39:58.803160] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.938 [2024-12-12 10:39:58.803224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.939 [2024-12-12 10:39:58.803246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:24.939 [2024-12-12 10:39:58.807305] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.939 [2024-12-12 10:39:58.807369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 
lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.939 [2024-12-12 10:39:58.807387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:24.939 [2024-12-12 10:39:58.811525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.939 [2024-12-12 10:39:58.811583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.939 [2024-12-12 10:39:58.811601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:24.939 [2024-12-12 10:39:58.815656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.939 [2024-12-12 10:39:58.815710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.939 [2024-12-12 10:39:58.815729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:24.939 [2024-12-12 10:39:58.819915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.939 [2024-12-12 10:39:58.819971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.939 [2024-12-12 10:39:58.819989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:24.939 [2024-12-12 10:39:58.824942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.939 [2024-12-12 10:39:58.824998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.939 [2024-12-12 10:39:58.825016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:24.939 [2024-12-12 10:39:58.829459] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.939 [2024-12-12 10:39:58.829514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.939 [2024-12-12 10:39:58.829532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:24.939 [2024-12-12 10:39:58.833739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.939 [2024-12-12 10:39:58.833800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.939 [2024-12-12 10:39:58.833818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:24.939 [2024-12-12 10:39:58.837939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.939 [2024-12-12 10:39:58.838010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:4 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.939 [2024-12-12 10:39:58.838027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:24.939 [2024-12-12 10:39:58.842168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.939 [2024-12-12 10:39:58.842231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.939 [2024-12-12 10:39:58.842249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:24.939 [2024-12-12 10:39:58.846321] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.939 [2024-12-12 10:39:58.846385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.939 [2024-12-12 10:39:58.846403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:24.939 [2024-12-12 10:39:58.850480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.939 [2024-12-12 10:39:58.850543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.939 [2024-12-12 10:39:58.850561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:24.939 [2024-12-12 10:39:58.854605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.939 [2024-12-12 10:39:58.854668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.939 [2024-12-12 10:39:58.854686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:24.939 [2024-12-12 10:39:58.858810] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.939 [2024-12-12 10:39:58.858873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.939 [2024-12-12 10:39:58.858891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:24.939 [2024-12-12 10:39:58.862981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.939 [2024-12-12 10:39:58.863035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.939 [2024-12-12 10:39:58.863053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:24.939 [2024-12-12 10:39:58.867106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.939 [2024-12-12 10:39:58.867181] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.939 [2024-12-12 10:39:58.867199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:24.939 [2024-12-12 10:39:58.871314] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.939 [2024-12-12 10:39:58.871374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.939 [2024-12-12 10:39:58.871391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:24.939 [2024-12-12 10:39:58.875461] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.939 [2024-12-12 10:39:58.875516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.939 [2024-12-12 10:39:58.875537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:24.939 [2024-12-12 10:39:58.879640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.939 [2024-12-12 10:39:58.879697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.939 [2024-12-12 10:39:58.879715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:24.939 [2024-12-12 10:39:58.883836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.939 [2024-12-12 10:39:58.883895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.939 [2024-12-12 10:39:58.883913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:24.939 [2024-12-12 10:39:58.888032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.939 [2024-12-12 10:39:58.888091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.939 [2024-12-12 10:39:58.888110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:24.939 [2024-12-12 10:39:58.892305] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.939 [2024-12-12 10:39:58.892357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.939 [2024-12-12 10:39:58.892375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:24.939 [2024-12-12 10:39:58.896479] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.939 [2024-12-12 
10:39:58.896539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.939 [2024-12-12 10:39:58.896557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:24.939 [2024-12-12 10:39:58.900800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.939 [2024-12-12 10:39:58.900856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.939 [2024-12-12 10:39:58.900874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:24.939 [2024-12-12 10:39:58.905481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.939 [2024-12-12 10:39:58.905607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.939 [2024-12-12 10:39:58.905626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:24.939 [2024-12-12 10:39:58.910224] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.939 [2024-12-12 10:39:58.910287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.939 [2024-12-12 10:39:58.910304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:24.939 [2024-12-12 10:39:58.915798] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.939 [2024-12-12 10:39:58.915853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.939 [2024-12-12 10:39:58.915874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:24.939 [2024-12-12 10:39:58.920534] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.939 [2024-12-12 10:39:58.920620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.939 [2024-12-12 10:39:58.920638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:24.940 [2024-12-12 10:39:58.925185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:24.940 [2024-12-12 10:39:58.925235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.940 [2024-12-12 10:39:58.925253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:24.940 [2024-12-12 10:39:58.929750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 
00:26:24.940 [2024-12-12 10:39:58.929808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.940 [2024-12-12 10:39:58.929826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
*ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.724 [2024-12-12 10:39:59.539619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.724 [2024-12-12 10:39:59.539637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:25.724 [2024-12-12 10:39:59.544560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.724 [2024-12-12 10:39:59.544638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.724 [2024-12-12 10:39:59.544656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:25.724 6503.00 IOPS, 812.88 MiB/s [2024-12-12T09:39:59.747Z] [2024-12-12 10:39:59.550474] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.724 [2024-12-12 10:39:59.550554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.724 [2024-12-12 10:39:59.550578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:25.724 [2024-12-12 10:39:59.555438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.724 [2024-12-12 10:39:59.555514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.724 [2024-12-12 10:39:59.555532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:25.724 [2024-12-12 10:39:59.560341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.724 [2024-12-12 10:39:59.560399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.724 [2024-12-12 10:39:59.560417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:25.724 [2024-12-12 10:39:59.565219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.724 [2024-12-12 10:39:59.565285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.724 [2024-12-12 10:39:59.565303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:25.724 [2024-12-12 10:39:59.570938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.724 [2024-12-12 10:39:59.571008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.724 [2024-12-12 10:39:59.571026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 
dnr:0 00:26:25.724 [2024-12-12 10:39:59.575922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.724 [2024-12-12 10:39:59.575990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.724 [2024-12-12 10:39:59.576008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:25.724 [2024-12-12 10:39:59.580815] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.724 [2024-12-12 10:39:59.580872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.724 [2024-12-12 10:39:59.580889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:25.724 [2024-12-12 10:39:59.585383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.725 [2024-12-12 10:39:59.585453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.725 [2024-12-12 10:39:59.585471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:25.725 [2024-12-12 10:39:59.590223] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.725 [2024-12-12 10:39:59.590291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.725 [2024-12-12 10:39:59.590310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:25.725 [2024-12-12 10:39:59.595028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.725 [2024-12-12 10:39:59.595119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.725 [2024-12-12 10:39:59.595138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:25.725 [2024-12-12 10:39:59.600433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.725 [2024-12-12 10:39:59.600514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.725 [2024-12-12 10:39:59.600532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:25.725 [2024-12-12 10:39:59.605481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.725 [2024-12-12 10:39:59.605535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.725 [2024-12-12 10:39:59.605552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 
cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:25.725 [2024-12-12 10:39:59.610496] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.725 [2024-12-12 10:39:59.610551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.725 [2024-12-12 10:39:59.610574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:25.725 [2024-12-12 10:39:59.615554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.725 [2024-12-12 10:39:59.615618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.725 [2024-12-12 10:39:59.615636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:25.725 [2024-12-12 10:39:59.620673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.725 [2024-12-12 10:39:59.620732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.725 [2024-12-12 10:39:59.620749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:25.725 [2024-12-12 10:39:59.627296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.725 [2024-12-12 10:39:59.627456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.725 [2024-12-12 10:39:59.627475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:25.725 [2024-12-12 10:39:59.634655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.725 [2024-12-12 10:39:59.634747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.725 [2024-12-12 10:39:59.634766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:25.725 [2024-12-12 10:39:59.641434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.725 [2024-12-12 10:39:59.641532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.725 [2024-12-12 10:39:59.641556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:25.725 [2024-12-12 10:39:59.648054] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.725 [2024-12-12 10:39:59.648163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.725 [2024-12-12 10:39:59.648181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:25.725 [2024-12-12 10:39:59.654362] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.725 [2024-12-12 10:39:59.654442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.725 [2024-12-12 10:39:59.654460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:25.725 [2024-12-12 10:39:59.660607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.725 [2024-12-12 10:39:59.660691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.725 [2024-12-12 10:39:59.660709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:25.725 [2024-12-12 10:39:59.665353] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.725 [2024-12-12 10:39:59.665404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.725 [2024-12-12 10:39:59.665421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:25.725 [2024-12-12 10:39:59.670022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.725 [2024-12-12 10:39:59.670088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.725 [2024-12-12 10:39:59.670106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:25.725 [2024-12-12 10:39:59.674634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.725 [2024-12-12 10:39:59.674705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.725 [2024-12-12 10:39:59.674722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:25.725 [2024-12-12 10:39:59.678973] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.725 [2024-12-12 10:39:59.679026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.725 [2024-12-12 10:39:59.679043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:25.725 [2024-12-12 10:39:59.683513] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.725 [2024-12-12 10:39:59.683568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.725 [2024-12-12 10:39:59.683592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:25.725 [2024-12-12 10:39:59.687960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.725 [2024-12-12 10:39:59.688028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.725 [2024-12-12 10:39:59.688046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:25.725 [2024-12-12 10:39:59.692654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.725 [2024-12-12 10:39:59.692715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.725 [2024-12-12 10:39:59.692733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:25.725 [2024-12-12 10:39:59.697215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.725 [2024-12-12 10:39:59.697277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.725 [2024-12-12 10:39:59.697296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:25.725 [2024-12-12 10:39:59.701531] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.725 [2024-12-12 10:39:59.701595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.725 [2024-12-12 10:39:59.701613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:25.725 [2024-12-12 10:39:59.705905] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.725 [2024-12-12 10:39:59.705955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.725 [2024-12-12 10:39:59.705973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:25.725 [2024-12-12 10:39:59.710484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.725 [2024-12-12 10:39:59.710544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.725 [2024-12-12 10:39:59.710562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:25.725 [2024-12-12 10:39:59.715078] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.725 [2024-12-12 10:39:59.715145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.725 [2024-12-12 10:39:59.715162] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:25.725 [2024-12-12 10:39:59.719370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.725 [2024-12-12 10:39:59.719425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.726 [2024-12-12 10:39:59.719443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:25.726 [2024-12-12 10:39:59.723614] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.726 [2024-12-12 10:39:59.723734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.726 [2024-12-12 10:39:59.723751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:25.726 [2024-12-12 10:39:59.727829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.726 [2024-12-12 10:39:59.727882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.726 [2024-12-12 10:39:59.727900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:25.726 [2024-12-12 10:39:59.732112] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.726 [2024-12-12 10:39:59.732178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.726 [2024-12-12 10:39:59.732196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:25.726 [2024-12-12 10:39:59.736339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.726 [2024-12-12 10:39:59.736400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.726 [2024-12-12 10:39:59.736417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:25.726 [2024-12-12 10:39:59.740622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.726 [2024-12-12 10:39:59.740680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.726 [2024-12-12 10:39:59.740698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:25.726 [2024-12-12 10:39:59.744893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.726 [2024-12-12 10:39:59.744952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.726 [2024-12-12 
10:39:59.744971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:25.986 [2024-12-12 10:39:59.749141] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.986 [2024-12-12 10:39:59.749203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.986 [2024-12-12 10:39:59.749222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:25.986 [2024-12-12 10:39:59.753412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.986 [2024-12-12 10:39:59.753484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.986 [2024-12-12 10:39:59.753502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:25.986 [2024-12-12 10:39:59.757693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.986 [2024-12-12 10:39:59.757747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.986 [2024-12-12 10:39:59.757765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:25.986 [2024-12-12 10:39:59.761903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.986 [2024-12-12 10:39:59.761960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.986 [2024-12-12 10:39:59.761981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:25.986 [2024-12-12 10:39:59.766085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.986 [2024-12-12 10:39:59.766145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.986 [2024-12-12 10:39:59.766163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:25.986 [2024-12-12 10:39:59.770300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.986 [2024-12-12 10:39:59.770356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.986 [2024-12-12 10:39:59.770374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:25.986 [2024-12-12 10:39:59.774523] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.986 [2024-12-12 10:39:59.774582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:25.986 [2024-12-12 10:39:59.774600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:25.986 [2024-12-12 10:39:59.778729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.986 [2024-12-12 10:39:59.778785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.986 [2024-12-12 10:39:59.778803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:25.986 [2024-12-12 10:39:59.782933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.986 [2024-12-12 10:39:59.783003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.986 [2024-12-12 10:39:59.783021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:25.986 [2024-12-12 10:39:59.787129] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.986 [2024-12-12 10:39:59.787193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.986 [2024-12-12 10:39:59.787211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:25.986 [2024-12-12 10:39:59.791305] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.986 [2024-12-12 10:39:59.791363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.986 [2024-12-12 10:39:59.791381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:25.986 [2024-12-12 10:39:59.795458] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.986 [2024-12-12 10:39:59.795524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.986 [2024-12-12 10:39:59.795541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:25.986 [2024-12-12 10:39:59.799661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.986 [2024-12-12 10:39:59.799723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.987 [2024-12-12 10:39:59.799742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:25.987 [2024-12-12 10:39:59.803844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.987 [2024-12-12 10:39:59.803905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1856 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.987 [2024-12-12 10:39:59.803922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:25.987 [2024-12-12 10:39:59.808057] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.987 [2024-12-12 10:39:59.808117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.987 [2024-12-12 10:39:59.808135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:25.987 [2024-12-12 10:39:59.812253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.987 [2024-12-12 10:39:59.812306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.987 [2024-12-12 10:39:59.812324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:25.987 [2024-12-12 10:39:59.816522] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.987 [2024-12-12 10:39:59.816619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.987 [2024-12-12 10:39:59.816638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:25.987 [2024-12-12 10:39:59.821257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.987 [2024-12-12 10:39:59.821318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.987 [2024-12-12 10:39:59.821336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:25.987 [2024-12-12 10:39:59.825864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.987 [2024-12-12 10:39:59.825949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.987 [2024-12-12 10:39:59.825967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:25.987 [2024-12-12 10:39:59.831448] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.987 [2024-12-12 10:39:59.831589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.987 [2024-12-12 10:39:59.831608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:25.987 [2024-12-12 10:39:59.837546] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.987 [2024-12-12 10:39:59.837705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 
nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.987 [2024-12-12 10:39:59.837723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:25.987 [2024-12-12 10:39:59.843641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.987 [2024-12-12 10:39:59.843812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.987 [2024-12-12 10:39:59.843830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:25.987 [2024-12-12 10:39:59.851014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.987 [2024-12-12 10:39:59.851200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.987 [2024-12-12 10:39:59.851219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:25.987 [2024-12-12 10:39:59.857523] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.987 [2024-12-12 10:39:59.857698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.987 [2024-12-12 10:39:59.857716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:25.987 [2024-12-12 10:39:59.863947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.987 [2024-12-12 10:39:59.864097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.987 [2024-12-12 10:39:59.864115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:25.987 [2024-12-12 10:39:59.870451] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.987 [2024-12-12 10:39:59.870583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.987 [2024-12-12 10:39:59.870601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:25.987 [2024-12-12 10:39:59.876814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.987 [2024-12-12 10:39:59.876973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.987 [2024-12-12 10:39:59.876990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:25.987 [2024-12-12 10:39:59.883235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.987 [2024-12-12 10:39:59.883371] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.987 [2024-12-12 10:39:59.883389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:25.987 [2024-12-12 10:39:59.890295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.987 [2024-12-12 10:39:59.890449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.987 [2024-12-12 10:39:59.890467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:25.987 [2024-12-12 10:39:59.896387] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.987 [2024-12-12 10:39:59.896546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.987 [2024-12-12 10:39:59.896568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:25.987 [2024-12-12 10:39:59.902762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.987 [2024-12-12 10:39:59.902963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.987 [2024-12-12 10:39:59.902981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:25.987 [2024-12-12 10:39:59.908880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.987 [2024-12-12 10:39:59.909028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.987 [2024-12-12 10:39:59.909046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:25.987 [2024-12-12 10:39:59.915411] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.987 [2024-12-12 10:39:59.915566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.987 [2024-12-12 10:39:59.915591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:25.987 [2024-12-12 10:39:59.921828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.987 [2024-12-12 10:39:59.921972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.987 [2024-12-12 10:39:59.921991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:25.987 [2024-12-12 10:39:59.927446] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.987 [2024-12-12 10:39:59.927602] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.987 [2024-12-12 10:39:59.927621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:25.987 [2024-12-12 10:39:59.932208] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.987 [2024-12-12 10:39:59.932325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.987 [2024-12-12 10:39:59.932343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:25.987 [2024-12-12 10:39:59.937756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.987 [2024-12-12 10:39:59.937836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.987 [2024-12-12 10:39:59.937854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:25.987 [2024-12-12 10:39:59.942979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.987 [2024-12-12 10:39:59.943084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.987 [2024-12-12 10:39:59.943101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:25.987 [2024-12-12 10:39:59.947801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.987 [2024-12-12 10:39:59.947870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.987 [2024-12-12 10:39:59.947892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:25.987 [2024-12-12 10:39:59.953205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.987 [2024-12-12 10:39:59.953280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.987 [2024-12-12 10:39:59.953299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:25.988 [2024-12-12 10:39:59.957692] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.988 [2024-12-12 10:39:59.957760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.988 [2024-12-12 10:39:59.957778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:25.988 [2024-12-12 10:39:59.962128] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.988 [2024-12-12 
10:39:59.962199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.988 [2024-12-12 10:39:59.962217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:25.988 [2024-12-12 10:39:59.966908] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.988 [2024-12-12 10:39:59.967015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.988 [2024-12-12 10:39:59.967033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:25.988 [2024-12-12 10:39:59.972622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.988 [2024-12-12 10:39:59.972767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.988 [2024-12-12 10:39:59.972785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:25.988 [2024-12-12 10:39:59.978798] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.988 [2024-12-12 10:39:59.978901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.988 [2024-12-12 10:39:59.978919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:25.988 [2024-12-12 10:39:59.985280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.988 [2024-12-12 10:39:59.985421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.988 [2024-12-12 10:39:59.985439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:25.988 [2024-12-12 10:39:59.990815] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.988 [2024-12-12 10:39:59.990923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.988 [2024-12-12 10:39:59.990941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:25.988 [2024-12-12 10:39:59.995409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.988 [2024-12-12 10:39:59.995495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.988 [2024-12-12 10:39:59.995514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:25.988 [2024-12-12 10:40:00.000139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with 
pdu=0x200016eff3c8 00:26:25.988 [2024-12-12 10:40:00.000197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.988 [2024-12-12 10:40:00.000216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:25.988 [2024-12-12 10:40:00.005668] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:25.988 [2024-12-12 10:40:00.005894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.988 [2024-12-12 10:40:00.005916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:26.248 [2024-12-12 10:40:00.012281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:26.248 [2024-12-12 10:40:00.012351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.248 [2024-12-12 10:40:00.012372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:26.248 [2024-12-12 10:40:00.017714] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:26.248 [2024-12-12 10:40:00.017813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.248 [2024-12-12 10:40:00.017848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:26.248 [2024-12-12 10:40:00.023050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:26.248 [2024-12-12 10:40:00.023147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.248 [2024-12-12 10:40:00.023166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:26.248 [2024-12-12 10:40:00.030104] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:26.248 [2024-12-12 10:40:00.030268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.248 [2024-12-12 10:40:00.030302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:26.248 [2024-12-12 10:40:00.035347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8 00:26:26.248 [2024-12-12 10:40:00.035437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.248 [2024-12-12 10:40:00.035456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:26.248 [2024-12-12 10:40:00.040753] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x20208f0) with pdu=0x200016eff3c8
00:26:26.248 [2024-12-12 10:40:00.040809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.248 [2024-12-12 10:40:00.040834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:26:26.248 [2024-12-12 10:40:00.046001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8
00:26:26.248 [2024-12-12 10:40:00.046068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.248 [2024-12-12 10:40:00.046087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
[... the same data_crc32_calc_done *ERROR* / nvme_io_qpair_print_command *NOTICE* / spdk_nvme_print_completion *NOTICE* triplet repeats for roughly a hundred further WRITE commands between 10:40:00.051 and 10:40:00.548, differing only in timestamp, lba, and sqhd; every completion is COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 on tqpair=(0x20208f0) with pdu=0x200016eff3c8 ...]
00:26:26.771 6378.50 IOPS, 797.31 MiB/s [2024-12-12T09:40:00.794Z]
00:26:26.771 [2024-12-12 10:40:00.553377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20208f0) with pdu=0x200016eff3c8
00:26:26.771 [2024-12-12 10:40:00.553438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.771 [2024-12-12 10:40:00.553457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:26:26.771
00:26:26.771 Latency(us)
00:26:26.771 [2024-12-12T09:40:00.794Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:26.771 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:26:26.771 nvme0n1                     :       2.00    6377.15     797.14       0.00     0.00    2504.42    1497.97   12358.22
00:26:26.771 [2024-12-12T09:40:00.794Z] ===================================================================================================================
00:26:26.771 [2024-12-12T09:40:00.794Z] Total                       :              6377.15     797.14       0.00     0.00    2504.42    1497.97   12358.22
00:26:26.771 {
00:26:26.771   "results": [
00:26:26.771     {
00:26:26.771       "job": "nvme0n1",
00:26:26.771       "core_mask": "0x2",
00:26:26.771       "workload": "randwrite",
00:26:26.771       "status": "finished",
00:26:26.771       "queue_depth": 16,
00:26:26.771       "io_size": 131072,
00:26:26.771       "runtime": 2.003717,
00:26:26.771       "iops": 6377.148070311327,
00:26:26.771       "mibps": 797.1435087889158,
00:26:26.771       "io_failed": 0,
00:26:26.771       "io_timeout": 0,
00:26:26.771       "avg_latency_us": 2504.417229017135,
00:26:26.771       "min_latency_us": 1497.9657142857143,
00:26:26.771       "max_latency_us": 12358.217142857144
00:26:26.771     }
00:26:26.771   ],
00:26:26.771   "core_count": 1
00:26:26.771 }
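Each CRC32C mismatch above surfaces at the initiator as a COMMAND TRANSIENT TRANSPORT ERROR completion and is accumulated in the bdev's NVMe error counters, which host/digest.sh reads back next over the bperf RPC socket. As a minimal sketch of that check, assuming the rpc.py path and /var/tmp/bperf.sock socket used by this run:

    #!/usr/bin/env bash
    # Read the transient transport error counter for nvme0n1 through the
    # bperf RPC socket; bdev_get_iostat exposes it under driver_specific.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # The digest_error test passes only if at least one injected error was counted.
    (( errcount > 0 )) && echo "observed $errcount transient transport errors"

The trace below shows the real helper doing exactly this; in this run it counted 413 such completions.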
00:26:26.771 10:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:26.771 10:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:26.771 10:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:26.771 | .driver_specific
00:26:26.771 | .nvme_error
00:26:26.771 | .status_code
00:26:26.771 | .command_transient_transport_error'
00:26:26.771 10:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:26.771 10:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 413 > 0 ))
00:26:26.771 10:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1663934
00:26:26.771 10:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1663934 ']'
00:26:26.771 10:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1663934
00:26:26.771 10:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:26:26.771 10:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:26.771 10:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1663934
00:26:27.030 10:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:26:27.030 10:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:26:27.030 10:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1663934'
00:26:27.030 killing process with pid 1663934
00:26:27.030 10:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1663934
00:26:27.030 Received shutdown signal, test time was about 2.000000 seconds
00:26:27.030
00:26:27.030 Latency(us)
00:26:27.030 [2024-12-12T09:40:01.053Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:27.030 [2024-12-12T09:40:01.053Z] ===================================================================================================================
00:26:27.030 [2024-12-12T09:40:01.053Z] Total                       :                 0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:26:27.030 10:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1663934
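killprocess, traced above for the bperf app (pid 1663934) and below for the nvmf target (pid 1662308), follows a fixed pattern: bail out on an empty pid, probe liveness with kill -0, resolve the command name with ps, escalate to sudo only when the target runs as sudo, then kill and reap. A simplified bash sketch of that pattern (the real helper in test/common/autotest_common.sh also reports already-dead processes, as seen further below):

    # Simplified killprocess sketch; mirrors the checks traced above.
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1              # nothing to kill
        kill -0 "$pid" || return 0             # already gone
        local process_name=
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        echo "killing process with pid $pid"
        if [[ $process_name == sudo ]]; then
            sudo kill "$pid"                   # root-owned process needs sudo
        else
            kill "$pid"
        fi
        wait "$pid" 2>/dev/null || true        # reap it if it was our child
    }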
00:26:27.030 10:40:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@116 -- # killprocess 1662308
00:26:27.030 10:40:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 1662308 ']'
00:26:27.030 10:40:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 1662308
00:26:27.030 10:40:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@959 -- # uname
00:26:27.030 10:40:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:27.030 10:40:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1662308
00:26:27.030 10:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:26:27.030 10:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:26:27.030 10:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1662308'
00:26:27.030 killing process with pid 1662308
00:26:27.030 10:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@973 -- # kill 1662308
00:26:27.030 10:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@978 -- # wait 1662308
00:26:27.290
00:26:27.290 real 0m13.897s
00:26:27.290 user 0m26.544s
00:26:27.290 sys 0m4.642s
00:26:27.290 10:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:27.290 10:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:27.290 ************************************
00:26:27.290 END TEST nvmf_digest_error
00:26:27.290 ************************************
00:26:27.290 10:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:26:27.290 10:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:26:27.290 10:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:27.290 10:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync
00:26:27.290 10:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:27.290 10:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e
00:26:27.290 10:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:27.290 10:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:27.290 rmmod nvme_tcp
00:26:27.290 rmmod nvme_fabrics
00:26:27.290 rmmod nvme_keyring
00:26:27.290 10:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:27.290 10:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e
00:26:27.290 10:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0
00:26:27.290 10:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 1662308 ']'
00:26:27.290 10:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 1662308
00:26:27.290 10:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 1662308 ']'
00:26:27.290 10:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 1662308
00:26:27.290 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1662308) - No such process
00:26:27.290 10:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 1662308 is not found'
00:26:27.290 Process with pid 1662308 is not found
00:26:27.290 10:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:26:27.290 10:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:26:27.290 10:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:26:27.290 10:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr
00:26:27.549 10:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save
00:26:27.549 10:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:26:27.549 10:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore
00:26:27.549 10:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
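The iptr step of nvmf_tcp_fini, traced above, restores the firewall by replaying the saved ruleset minus every rule tagged with the SPDK_NVMF comment. Reduced to its core it is a single pipeline; a sketch (the version in nvmf/common.sh resolves the iptables binaries and may run them under sudo):

    # Drop SPDK-inserted firewall rules, keep everything else.
    iptr() {
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }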
00:26:27.549 10:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns
00:26:27.549 10:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:27.549 10:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:27.549 10:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:29.453 10:40:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:26:29.453
00:26:29.453 real 0m36.768s
00:26:29.453 user 0m55.426s
00:26:29.453 sys 0m13.890s
00:26:29.453 10:40:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:29.453 10:40:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:26:29.453 ************************************
00:26:29.453 END TEST nvmf_digest
00:26:29.453 ************************************
00:26:29.453 10:40:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]]
00:26:29.453 10:40:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]]
00:26:29.453 10:40:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]]
00:26:29.453 10:40:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:26:29.453 10:40:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:26:29.453 10:40:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:29.453 10:40:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:29.453 ************************************
00:26:29.453 START TEST nvmf_bdevperf
00:26:29.453 ************************************
00:26:29.453 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
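Every sub-test in this log is launched through run_test, which prints the START/END banners and the real/user/sys timing seen around nvmf_digest above. A bare-bones sketch of that wrapper pattern (a hypothetical simplification; the helper in test/common/autotest_common.sh also validates its arguments and records per-suite timings):

    # Minimal run_test-style wrapper: banner, timed execution, banner.
    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }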
00:26:29.713 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:29.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.713 --rc genhtml_branch_coverage=1 00:26:29.713 --rc genhtml_function_coverage=1 00:26:29.713 --rc genhtml_legend=1 00:26:29.713 --rc geninfo_all_blocks=1 00:26:29.713 --rc geninfo_unexecuted_blocks=1 00:26:29.713 00:26:29.713 ' 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:29.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.713 --rc genhtml_branch_coverage=1 00:26:29.713 --rc genhtml_function_coverage=1 00:26:29.713 --rc genhtml_legend=1 00:26:29.713 --rc geninfo_all_blocks=1 00:26:29.713 --rc geninfo_unexecuted_blocks=1 00:26:29.713 00:26:29.713 ' 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:29.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.713 --rc genhtml_branch_coverage=1 00:26:29.713 --rc genhtml_function_coverage=1 00:26:29.713 --rc genhtml_legend=1 00:26:29.713 --rc geninfo_all_blocks=1 00:26:29.713 --rc geninfo_unexecuted_blocks=1 00:26:29.713 00:26:29.713 ' 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:29.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.713 --rc genhtml_branch_coverage=1 00:26:29.713 --rc genhtml_function_coverage=1 00:26:29.713 --rc genhtml_legend=1 00:26:29.713 --rc geninfo_all_blocks=1 00:26:29.713 --rc geninfo_unexecuted_blocks=1 00:26:29.713 00:26:29.713 ' 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:29.713 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:29.714 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:26:29.714 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:26:29.714 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:29.714 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:29.714 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:29.714 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:29.714 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:29.714 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:29.714 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:29.714 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:29.714 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:29.714 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.714 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.714 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.714 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:26:29.714 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.714 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:26:29.714 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:29.714 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:29.714 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:29.714 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:29.714 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:29.714 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:29.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:29.714 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:29.714 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:29.714 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:29.714 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:29.714 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:29.714 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:29.714 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:29.714 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:29.714 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:29.714 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:29.714 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:29.714 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:29.714 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:29.714 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:29.714 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:29.714 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:29.714 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:26:29.714 10:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.286 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:36.286 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:26:36.286 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:36.286 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:36.286 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:36.286 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:36.286 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:36.286 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:26:36.286 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:36.286 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:26:36.286 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:26:36.286 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:26:36.286 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:26:36.286 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:26:36.286 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:26:36.286 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:36.286 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:36.286 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:36.286 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:36.286 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:36.286 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:36.286 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:36.286 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:36.286 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:36.286 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:36.286 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:36.286 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:36.286 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:36.286 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:36.286 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:36.286 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:36.286 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:36.286 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:36.286 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:36.286 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:36.286 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:36.286 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:36.286 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:36.286 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:36.286 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:36.286 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:36.286 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:36.286 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:36.287 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
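The discovery pass above maps each matching PCI function to its kernel netdev through sysfs before the harness picks target and initiator interfaces. A condensed sketch of that loop, with the two 0x8086:0x159b addresses from this run hard-coded; the operstate read stands in for the trace's up/up check and is an assumption:

    pci_devs=(0000:af:00.0 0000:af:00.1)     # the E810 functions reported above
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one subdir per netdev
        for net_dev in "${pci_net_devs[@]}"; do
            [[ $(< "$net_dev/operstate") == up ]] || continue
            net_devs+=("${net_dev##*/}")                   # e.g. cvl_0_0, cvl_0_1
            echo "Found net devices under $pci: ${net_dev##*/}"
        done
    done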
00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:36.287 Found net devices under 0000:af:00.0: cvl_0_0 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:36.287 Found net devices under 0000:af:00.1: cvl_0_1 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:36.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:36.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:26:36.287 00:26:36.287 --- 10.0.0.2 ping statistics --- 00:26:36.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.287 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:36.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:36.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:26:36.287 00:26:36.287 --- 10.0.0.1 ping statistics --- 00:26:36.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.287 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1668078 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1668078 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1668078 ']' 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:36.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:36.287 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.287 [2024-12-12 10:40:09.580332] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:26:36.287 [2024-12-12 10:40:09.580380] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:36.287 [2024-12-12 10:40:09.659559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:36.287 [2024-12-12 10:40:09.700793] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:36.287 [2024-12-12 10:40:09.700827] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:36.288 [2024-12-12 10:40:09.700836] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:36.288 [2024-12-12 10:40:09.700842] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:36.288 [2024-12-12 10:40:09.700847] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:36.288 [2024-12-12 10:40:09.702083] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:26:36.288 [2024-12-12 10:40:09.702189] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:36.288 [2024-12-12 10:40:09.702190] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:26:36.288 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:36.288 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:26:36.288 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:36.288 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:36.288 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.288 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:36.288 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:36.288 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.288 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.288 [2024-12-12 10:40:09.838510] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:36.288 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.288 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:36.288 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.288 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.288 Malloc0 00:26:36.288 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.288 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:36.288 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.288 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.288 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
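Provisioning the bdevperf target above is a handful of JSON-RPC calls (the namespace and listener calls follow immediately below). The same sequence can be replayed by hand with scripts/rpk.py's sibling scripts/rpc.py, which is what the rpc_cmd wrapper drives in this harness; a sketch assuming the default /var/tmp/spdk.sock socket:

    RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"   # socket path is the SPDK default
    $RPC nvmf_create_transport -t tcp -o -u 8192   # transport options exactly as traced
    $RPC bdev_malloc_create 64 512 -b Malloc0      # 64 MiB RAM-backed bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420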
00:26:36.288 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:36.288 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.288 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.288 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.288 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:36.288 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.288 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.288 [2024-12-12 10:40:09.896305] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:36.288 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.288 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:36.288 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:36.288 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:36.288 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:36.288 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:36.288 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:36.288 { 00:26:36.288 "params": { 00:26:36.288 "name": "Nvme$subsystem", 00:26:36.288 "trtype": "$TEST_TRANSPORT", 00:26:36.288 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.288 "adrfam": "ipv4", 00:26:36.288 "trsvcid": "$NVMF_PORT", 00:26:36.288 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.288 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.288 "hdgst": ${hdgst:-false}, 00:26:36.288 "ddgst": ${ddgst:-false} 00:26:36.288 }, 00:26:36.288 "method": "bdev_nvme_attach_controller" 00:26:36.288 } 00:26:36.288 EOF 00:26:36.288 )") 00:26:36.288 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:36.288 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:26:36.288 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:36.288 10:40:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:36.288 "params": { 00:26:36.288 "name": "Nvme1", 00:26:36.288 "trtype": "tcp", 00:26:36.288 "traddr": "10.0.0.2", 00:26:36.288 "adrfam": "ipv4", 00:26:36.288 "trsvcid": "4420", 00:26:36.288 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:36.288 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:36.288 "hdgst": false, 00:26:36.288 "ddgst": false 00:26:36.288 }, 00:26:36.288 "method": "bdev_nvme_attach_controller" 00:26:36.288 }' 00:26:36.288 [2024-12-12 10:40:09.949526] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:26:36.288 [2024-12-12 10:40:09.949580] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1668101 ] 00:26:36.288 [2024-12-12 10:40:10.026158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.288 [2024-12-12 10:40:10.073000] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:36.288 Running I/O for 1 seconds... 00:26:37.666 11242.00 IOPS, 43.91 MiB/s 00:26:37.666 Latency(us) 00:26:37.666 [2024-12-12T09:40:11.689Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:37.666 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:37.666 Verification LBA range: start 0x0 length 0x4000 00:26:37.666 Nvme1n1 : 1.01 11310.66 44.18 0.00 0.00 11268.10 1474.56 13294.45 00:26:37.666 [2024-12-12T09:40:11.689Z] =================================================================================================================== 00:26:37.666 [2024-12-12T09:40:11.689Z] Total : 11310.66 44.18 0.00 0.00 11268.10 1474.56 13294.45 00:26:37.666 10:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1668326 00:26:37.666 10:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:26:37.666 10:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:37.666 10:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:37.666 10:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:37.667 10:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:37.667 10:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:37.667 10:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:37.667 { 00:26:37.667 "params": { 00:26:37.667 "name": "Nvme$subsystem", 00:26:37.667 "trtype": "$TEST_TRANSPORT", 00:26:37.667 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:37.667 "adrfam": "ipv4", 00:26:37.667 "trsvcid": "$NVMF_PORT", 00:26:37.667 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:37.667 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:37.667 "hdgst": ${hdgst:-false}, 00:26:37.667 "ddgst": ${ddgst:-false} 00:26:37.667 }, 00:26:37.667 "method": "bdev_nvme_attach_controller" 00:26:37.667 } 00:26:37.667 EOF 00:26:37.667 )") 00:26:37.667 10:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:37.667 10:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:26:37.667 10:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:37.667 10:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:37.667 "params": { 00:26:37.667 "name": "Nvme1", 00:26:37.667 "trtype": "tcp", 00:26:37.667 "traddr": "10.0.0.2", 00:26:37.667 "adrfam": "ipv4", 00:26:37.667 "trsvcid": "4420", 00:26:37.667 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:37.667 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:37.667 "hdgst": false, 00:26:37.667 "ddgst": false 00:26:37.667 }, 00:26:37.667 "method": "bdev_nvme_attach_controller" 00:26:37.667 }' 00:26:37.667 [2024-12-12 10:40:11.490156] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:26:37.667 [2024-12-12 10:40:11.490209] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1668326 ] 00:26:37.667 [2024-12-12 10:40:11.562680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.667 [2024-12-12 10:40:11.600193] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:37.925 Running I/O for 15 seconds... 00:26:39.872 11351.00 IOPS, 44.34 MiB/s [2024-12-12T09:40:14.465Z] 11439.50 IOPS, 44.69 MiB/s [2024-12-12T09:40:14.465Z] 10:40:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1668078 00:26:40.442 10:40:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:26:40.442 [2024-12-12 10:40:14.459319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:101552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.442 [2024-12-12 10:40:14.459357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.442 [2024-12-12 10:40:14.459374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:101560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.442 [2024-12-12 10:40:14.459384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.442 [2024-12-12 10:40:14.459395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:101568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.442 [2024-12-12 10:40:14.459404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.442 [2024-12-12 10:40:14.459414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:101576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.442 [2024-12-12 10:40:14.459423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.442 [2024-12-12 10:40:14.459432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:101584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.442 [2024-12-12 10:40:14.459440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.442 [2024-12-12 10:40:14.459450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:101592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.442 
[2024-12-12 10:40:14.459458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.443
[... dozens of further nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs elided: with the target killed mid-run, every outstanding READ on qid:1 (lba 101600 through 102152, len:8, assorted cids) completes identically as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...] 00:26:40.444 [2024-12-12
10:40:14.460708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:102160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.444 [2024-12-12 10:40:14.460715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.444 [2024-12-12 10:40:14.460723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:102168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.444 [2024-12-12 10:40:14.460729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.444 [2024-12-12 10:40:14.460737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:102176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.444 [2024-12-12 10:40:14.460743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.444 [2024-12-12 10:40:14.460752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:102184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.444 [2024-12-12 10:40:14.460758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.444 [2024-12-12 10:40:14.460767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:102192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.444 [2024-12-12 10:40:14.460773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.444 [2024-12-12 10:40:14.460798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:102200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.444 [2024-12-12 10:40:14.460806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.444 [2024-12-12 10:40:14.460815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:102208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.444 [2024-12-12 10:40:14.460821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.444 [2024-12-12 10:40:14.460830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:102216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.444 [2024-12-12 10:40:14.460836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.444 [2024-12-12 10:40:14.460844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:102224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.444 [2024-12-12 10:40:14.460851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.444 [2024-12-12 10:40:14.460859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:102232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.444 [2024-12-12 10:40:14.460872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.444 [2024-12-12 10:40:14.460880] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:102240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.444 [2024-12-12 10:40:14.460887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.444 [2024-12-12 10:40:14.460895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:102248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.444 [2024-12-12 10:40:14.460902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.444 [2024-12-12 10:40:14.460910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:102256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.444 [2024-12-12 10:40:14.460918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.444 [2024-12-12 10:40:14.460928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:102264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.444 [2024-12-12 10:40:14.460934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.444 [2024-12-12 10:40:14.460943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:102272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.444 [2024-12-12 10:40:14.460950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.444 [2024-12-12 10:40:14.460958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:102280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.444 [2024-12-12 10:40:14.460965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.444 [2024-12-12 10:40:14.460973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:102288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.444 [2024-12-12 10:40:14.460982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.444 [2024-12-12 10:40:14.460990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:102296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.444 [2024-12-12 10:40:14.460997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.444 [2024-12-12 10:40:14.461006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:102304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.444 [2024-12-12 10:40:14.461012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.444 [2024-12-12 10:40:14.461020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:102312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.444 [2024-12-12 10:40:14.461028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.444 [2024-12-12 10:40:14.461037] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:92 nsid:1 lba:102320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.444 [2024-12-12 10:40:14.461044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.444 [2024-12-12 10:40:14.461053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:102328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-12-12 10:40:14.461060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.445 [2024-12-12 10:40:14.461068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-12-12 10:40:14.461076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.445 [2024-12-12 10:40:14.461085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:102344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-12-12 10:40:14.461093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.445 [2024-12-12 10:40:14.461101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:102352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-12-12 10:40:14.461107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.445 [2024-12-12 10:40:14.461116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:102360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-12-12 10:40:14.461122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.445 [2024-12-12 10:40:14.461131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:102368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-12-12 10:40:14.461138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.445 [2024-12-12 10:40:14.461147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:102376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-12-12 10:40:14.461153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.445 [2024-12-12 10:40:14.461162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:102384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-12-12 10:40:14.461168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.445 [2024-12-12 10:40:14.461177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:102392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-12-12 10:40:14.461183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.445 [2024-12-12 10:40:14.461192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 
lba:102400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-12-12 10:40:14.461199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.445 [2024-12-12 10:40:14.461208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:102408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-12-12 10:40:14.461214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.445 [2024-12-12 10:40:14.461222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:102416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-12-12 10:40:14.461231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.445 [2024-12-12 10:40:14.461239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:102424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-12-12 10:40:14.461246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.445 [2024-12-12 10:40:14.461255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:102432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-12-12 10:40:14.461262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.445 [2024-12-12 10:40:14.461272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:102568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.445 [2024-12-12 10:40:14.461279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.445 [2024-12-12 10:40:14.461287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-12-12 10:40:14.461294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.445 [2024-12-12 10:40:14.461302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:102448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-12-12 10:40:14.461310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.445 [2024-12-12 10:40:14.461318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:102456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-12-12 10:40:14.461325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.445 [2024-12-12 10:40:14.461333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:102464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-12-12 10:40:14.461340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.445 [2024-12-12 10:40:14.461348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:102472 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:40.445 [2024-12-12 10:40:14.461354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.445 [2024-12-12 10:40:14.461364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-12-12 10:40:14.461371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.445 [2024-12-12 10:40:14.461379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:102488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-12-12 10:40:14.461386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.445 [2024-12-12 10:40:14.461394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:102496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-12-12 10:40:14.461400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.445 [2024-12-12 10:40:14.461408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:102504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-12-12 10:40:14.461416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.445 [2024-12-12 10:40:14.461424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:102512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-12-12 10:40:14.461431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.445 [2024-12-12 10:40:14.461439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:102520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-12-12 10:40:14.461446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.445 [2024-12-12 10:40:14.461454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:102528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-12-12 10:40:14.461462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.445 [2024-12-12 10:40:14.461471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:102536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-12-12 10:40:14.461479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.445 [2024-12-12 10:40:14.461488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:102544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-12-12 10:40:14.461495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.445 [2024-12-12 10:40:14.461503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:102552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 
[2024-12-12 10:40:14.461510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.445 [2024-12-12 10:40:14.461518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b1510 is same with the state(6) to be set 00:26:40.445 [2024-12-12 10:40:14.461528] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:40.445 [2024-12-12 10:40:14.461534] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:40.445 [2024-12-12 10:40:14.461543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102560 len:8 PRP1 0x0 PRP2 0x0 00:26:40.445 [2024-12-12 10:40:14.461549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.706 [2024-12-12 10:40:14.464574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.706 [2024-12-12 10:40:14.464627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:40.706 [2024-12-12 10:40:14.465142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.706 [2024-12-12 10:40:14.465159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:40.706 [2024-12-12 10:40:14.465168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:40.706 [2024-12-12 10:40:14.465342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:40.706 [2024-12-12 10:40:14.465516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.706 [2024-12-12 10:40:14.465524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.706 [2024-12-12 10:40:14.465534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.706 [2024-12-12 10:40:14.465543] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
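The "(00/08)" SPDK prints in each completion is the NVMe (SCT/SC) status pair: status code type 0x0 (Generic Command Status) with status code 0x08, which the NVMe base specification defines as "Command Aborted due to SQ Deletion". That is exactly what happens here: the initiator tears down I/O submission queue 1 during the controller reset, so every READ still queued on it is completed with this status. A minimal parsing sketch (Python, written against the record format shown above; the GENERIC_SC table is abridged from the NVMe base spec, not taken from SPDK's sources) that decodes the pair and tallies the aborted LBA range:

```python
import re
import sys

# SPDK prints each aborted command and its completion as separate records:
#   nvme_io_qpair_print_command:  "READ sqid:1 cid:54 nsid:1 lba:101840 len:8 ..."
#   spdk_nvme_print_completion:   "ABORTED - SQ DELETION (00/08) qid:1 cid:0 ..."
CMD_RE = re.compile(r'\b(READ|WRITE) sqid:\d+ cid:\d+ nsid:\d+ lba:(\d+) len:(\d+)')
CPL_RE = re.compile(r'\(([0-9a-f]{2})/([0-9a-f]{2})\) qid:\d+')

# Abridged Generic Command Status table (SCT 0x0) from the NVMe base spec;
# 0x08 is the "(00/08)" seen throughout the dump above.
GENERIC_SC = {
    0x00: "SUCCESSFUL COMPLETION",
    0x07: "COMMAND ABORT REQUESTED",
    0x08: "COMMAND ABORTED DUE TO SQ DELETION",
}

def tally_sq_deletion_aborts(lines):
    """Count SQ-deletion aborts and track the LBA range they covered."""
    last_lba = None
    count, lo, hi = 0, None, None
    for line in lines:
        cmd = CMD_RE.search(line)
        if cmd:
            last_lba = int(cmd.group(2))
        cpl = CPL_RE.search(line)
        if cpl and (int(cpl.group(1), 16), int(cpl.group(2), 16)) == (0x0, 0x08):
            count += 1
            if last_lba is not None:
                lo = last_lba if lo is None else min(lo, last_lba)
                hi = last_lba if hi is None else max(hi, last_lba)
    return count, lo, hi

if __name__ == "__main__":
    n, lo, hi = tally_sq_deletion_aborts(sys.stdin)
    print(f"{n} commands aborted by SQ deletion, lba {lo}..{hi}")
```

Fed this section of the console log on stdin, the sketch would report the burst above as roughly ninety aborted commands spanning lba 101840 through 102568.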
00:26:40.706 [2024-12-12 10:40:14.477873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.706 [2024-12-12 10:40:14.478307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.706 [2024-12-12 10:40:14.478363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:40.706 [2024-12-12 10:40:14.478388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:40.706 [2024-12-12 10:40:14.478928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:40.706 [2024-12-12 10:40:14.479099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.706 [2024-12-12 10:40:14.479112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.706 [2024-12-12 10:40:14.479119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.706 [2024-12-12 10:40:14.479127] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
[... the same reset/reconnect cycle repeats 25 more times at roughly 13 ms intervals, from 10:40:14.490 through 10:40:14.799: every attempt fails at connect() with errno = 111 against addr=10.0.0.2, port=4420 and ends with "Resetting controller failed." ...]
00:26:40.970 [2024-12-12 10:40:14.811976] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.970 [2024-12-12 10:40:14.812338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.970 [2024-12-12 10:40:14.812356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:40.970 [2024-12-12 10:40:14.812364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:40.970 [2024-12-12 10:40:14.812538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:40.970 [2024-12-12 10:40:14.812717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.970 [2024-12-12 10:40:14.812728] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.970 [2024-12-12 10:40:14.812738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.970 [2024-12-12 10:40:14.812746] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:40.970 [2024-12-12 10:40:14.825112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.970 [2024-12-12 10:40:14.825474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.970 [2024-12-12 10:40:14.825492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:40.970 [2024-12-12 10:40:14.825501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:40.970 [2024-12-12 10:40:14.825692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:40.970 [2024-12-12 10:40:14.825884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.970 [2024-12-12 10:40:14.825894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.970 [2024-12-12 10:40:14.825901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.970 [2024-12-12 10:40:14.825908] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.970 [2024-12-12 10:40:14.838096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.970 [2024-12-12 10:40:14.838497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.970 [2024-12-12 10:40:14.838514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:40.970 [2024-12-12 10:40:14.838522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:40.970 [2024-12-12 10:40:14.838717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:40.970 [2024-12-12 10:40:14.838901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.970 [2024-12-12 10:40:14.838912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.970 [2024-12-12 10:40:14.838919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.970 [2024-12-12 10:40:14.838926] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:40.970 [2024-12-12 10:40:14.851204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.970 [2024-12-12 10:40:14.851637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.970 [2024-12-12 10:40:14.851655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:40.970 [2024-12-12 10:40:14.851664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:40.970 [2024-12-12 10:40:14.851837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:40.970 [2024-12-12 10:40:14.852010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.970 [2024-12-12 10:40:14.852020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.970 [2024-12-12 10:40:14.852027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.970 [2024-12-12 10:40:14.852034] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.970 [2024-12-12 10:40:14.864351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.970 [2024-12-12 10:40:14.864771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.970 [2024-12-12 10:40:14.864790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:40.970 [2024-12-12 10:40:14.864798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:40.970 [2024-12-12 10:40:14.864983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:40.970 [2024-12-12 10:40:14.865168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.970 [2024-12-12 10:40:14.865178] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.970 [2024-12-12 10:40:14.865185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.970 [2024-12-12 10:40:14.865193] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:40.970 [2024-12-12 10:40:14.877658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.970 [2024-12-12 10:40:14.878080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.970 [2024-12-12 10:40:14.878099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:40.970 [2024-12-12 10:40:14.878108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:40.970 [2024-12-12 10:40:14.878292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:40.970 [2024-12-12 10:40:14.878476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.970 [2024-12-12 10:40:14.878486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.970 [2024-12-12 10:40:14.878493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.970 [2024-12-12 10:40:14.878500] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.970 [2024-12-12 10:40:14.890817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.970 [2024-12-12 10:40:14.891236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.970 [2024-12-12 10:40:14.891254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:40.970 [2024-12-12 10:40:14.891262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:40.970 [2024-12-12 10:40:14.891446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:40.970 [2024-12-12 10:40:14.891637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.970 [2024-12-12 10:40:14.891648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.970 [2024-12-12 10:40:14.891656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.970 [2024-12-12 10:40:14.891662] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:40.970 9692.67 IOPS, 37.86 MiB/s [2024-12-12T09:40:14.993Z] [2024-12-12 10:40:14.904054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.970 [2024-12-12 10:40:14.904499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.970 [2024-12-12 10:40:14.904521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:40.970 [2024-12-12 10:40:14.904530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:40.970 [2024-12-12 10:40:14.904721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:40.970 [2024-12-12 10:40:14.904907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.970 [2024-12-12 10:40:14.904917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.970 [2024-12-12 10:40:14.904925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.970 [2024-12-12 10:40:14.904932] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
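The interleaved "9692.67 IOPS, 37.86 MiB/s" line is the periodic throughput sample from the perf tool driving the I/O (with its own UTC timestamp in brackets). The two figures are mutually consistent with a 4 KiB I/O size, an inference from the numbers rather than something the log states: 9692.67 x 4096 B / 2^20 = 37.86 MiB/s. A one-line check:

    /* Illustrative arithmetic check; the 4 KiB block size is an assumption
     * inferred from the log's IOPS and MiB/s figures. */
    #include <stdio.h>

    int main(void)
    {
        double iops = 9692.67;            /* from the log sample above */
        double block_bytes = 4096.0;      /* assumed 4 KiB I/O size */
        double mib_s = iops * block_bytes / (1024.0 * 1024.0);
        printf("%.2f MiB/s\n", mib_s);    /* prints 37.86 */
        return 0;
    }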
00:26:40.970 [2024-12-12 10:40:14.917216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.970 [2024-12-12 10:40:14.917654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.970 [2024-12-12 10:40:14.917674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:40.970 [2024-12-12 10:40:14.917683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:40.970 [2024-12-12 10:40:14.917873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:40.970 [2024-12-12 10:40:14.918048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.970 [2024-12-12 10:40:14.918058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.970 [2024-12-12 10:40:14.918064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.970 [2024-12-12 10:40:14.918071] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:40.971 [2024-12-12 10:40:14.930447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.971 [2024-12-12 10:40:14.930890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.971 [2024-12-12 10:40:14.930909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:40.971 [2024-12-12 10:40:14.930918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:40.971 [2024-12-12 10:40:14.931102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:40.971 [2024-12-12 10:40:14.931286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.971 [2024-12-12 10:40:14.931297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.971 [2024-12-12 10:40:14.931304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.971 [2024-12-12 10:40:14.931311] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.971 [2024-12-12 10:40:14.943742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.971 [2024-12-12 10:40:14.944074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.971 [2024-12-12 10:40:14.944092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:40.971 [2024-12-12 10:40:14.944101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:40.971 [2024-12-12 10:40:14.944288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:40.971 [2024-12-12 10:40:14.944473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.971 [2024-12-12 10:40:14.944483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.971 [2024-12-12 10:40:14.944491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.971 [2024-12-12 10:40:14.944498] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:40.971 [2024-12-12 10:40:14.956990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.971 [2024-12-12 10:40:14.957346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.971 [2024-12-12 10:40:14.957364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:40.971 [2024-12-12 10:40:14.957373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:40.971 [2024-12-12 10:40:14.957557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:40.971 [2024-12-12 10:40:14.957745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.971 [2024-12-12 10:40:14.957756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.971 [2024-12-12 10:40:14.957764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.971 [2024-12-12 10:40:14.957771] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.971 [2024-12-12 10:40:14.970052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.971 [2024-12-12 10:40:14.970485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.971 [2024-12-12 10:40:14.970503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:40.971 [2024-12-12 10:40:14.970511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:40.971 [2024-12-12 10:40:14.970690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:40.971 [2024-12-12 10:40:14.970864] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.971 [2024-12-12 10:40:14.970874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.971 [2024-12-12 10:40:14.970881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.971 [2024-12-12 10:40:14.970888] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:40.971 [2024-12-12 10:40:14.983095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.971 [2024-12-12 10:40:14.983415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.971 [2024-12-12 10:40:14.983433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:40.971 [2024-12-12 10:40:14.983441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:40.971 [2024-12-12 10:40:14.983620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:40.971 [2024-12-12 10:40:14.983794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.971 [2024-12-12 10:40:14.983804] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.971 [2024-12-12 10:40:14.983815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.971 [2024-12-12 10:40:14.983822] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.233 [2024-12-12 10:40:14.996200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.233 [2024-12-12 10:40:14.996495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.233 [2024-12-12 10:40:14.996513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.233 [2024-12-12 10:40:14.996521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.233 [2024-12-12 10:40:14.996700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.233 [2024-12-12 10:40:14.996874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.233 [2024-12-12 10:40:14.996884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.233 [2024-12-12 10:40:14.996891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.233 [2024-12-12 10:40:14.996898] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:41.234 [2024-12-12 10:40:15.009089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.234 [2024-12-12 10:40:15.009380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.234 [2024-12-12 10:40:15.009398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.234 [2024-12-12 10:40:15.009405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.234 [2024-12-12 10:40:15.009579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.234 [2024-12-12 10:40:15.009748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.234 [2024-12-12 10:40:15.009759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.234 [2024-12-12 10:40:15.009766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.234 [2024-12-12 10:40:15.009772] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.234 [2024-12-12 10:40:15.021976] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.234 [2024-12-12 10:40:15.022298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.234 [2024-12-12 10:40:15.022316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.234 [2024-12-12 10:40:15.022323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.234 [2024-12-12 10:40:15.022482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.234 [2024-12-12 10:40:15.022647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.234 [2024-12-12 10:40:15.022657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.234 [2024-12-12 10:40:15.022664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.234 [2024-12-12 10:40:15.022670] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:41.234 [2024-12-12 10:40:15.034873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.234 [2024-12-12 10:40:15.035248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.234 [2024-12-12 10:40:15.035265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.234 [2024-12-12 10:40:15.035272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.234 [2024-12-12 10:40:15.035433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.234 [2024-12-12 10:40:15.035598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.234 [2024-12-12 10:40:15.035608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.234 [2024-12-12 10:40:15.035615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.234 [2024-12-12 10:40:15.035621] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.234 [2024-12-12 10:40:15.047665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.234 [2024-12-12 10:40:15.048008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.234 [2024-12-12 10:40:15.048025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.234 [2024-12-12 10:40:15.048033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.234 [2024-12-12 10:40:15.048192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.234 [2024-12-12 10:40:15.048352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.234 [2024-12-12 10:40:15.048361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.234 [2024-12-12 10:40:15.048368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.234 [2024-12-12 10:40:15.048374] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:41.234 [2024-12-12 10:40:15.060478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.234 [2024-12-12 10:40:15.060813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.234 [2024-12-12 10:40:15.060858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.234 [2024-12-12 10:40:15.060883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.234 [2024-12-12 10:40:15.061407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.234 [2024-12-12 10:40:15.061574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.234 [2024-12-12 10:40:15.061584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.234 [2024-12-12 10:40:15.061591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.234 [2024-12-12 10:40:15.061597] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.234 [2024-12-12 10:40:15.073337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.234 [2024-12-12 10:40:15.073680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.234 [2024-12-12 10:40:15.073698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.234 [2024-12-12 10:40:15.073709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.234 [2024-12-12 10:40:15.073869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.234 [2024-12-12 10:40:15.074029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.234 [2024-12-12 10:40:15.074038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.234 [2024-12-12 10:40:15.074045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.234 [2024-12-12 10:40:15.074051] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:41.234 [2024-12-12 10:40:15.086102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.234 [2024-12-12 10:40:15.086432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.234 [2024-12-12 10:40:15.086449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.234 [2024-12-12 10:40:15.086457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.234 [2024-12-12 10:40:15.086621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.234 [2024-12-12 10:40:15.086781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.234 [2024-12-12 10:40:15.086791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.234 [2024-12-12 10:40:15.086798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.234 [2024-12-12 10:40:15.086804] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.234 [2024-12-12 10:40:15.098926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.234 [2024-12-12 10:40:15.099326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.234 [2024-12-12 10:40:15.099343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.234 [2024-12-12 10:40:15.099351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.234 [2024-12-12 10:40:15.099510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.234 [2024-12-12 10:40:15.099683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.234 [2024-12-12 10:40:15.099693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.234 [2024-12-12 10:40:15.099700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.234 [2024-12-12 10:40:15.099707] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:41.234 [2024-12-12 10:40:15.111805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.234 [2024-12-12 10:40:15.112081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.234 [2024-12-12 10:40:15.112098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.234 [2024-12-12 10:40:15.112107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.234 [2024-12-12 10:40:15.112266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.234 [2024-12-12 10:40:15.112429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.234 [2024-12-12 10:40:15.112439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.234 [2024-12-12 10:40:15.112445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.235 [2024-12-12 10:40:15.112452] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.235 [2024-12-12 10:40:15.124650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.235 [2024-12-12 10:40:15.124905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.235 [2024-12-12 10:40:15.124922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.235 [2024-12-12 10:40:15.124930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.235 [2024-12-12 10:40:15.125089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.235 [2024-12-12 10:40:15.125248] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.235 [2024-12-12 10:40:15.125258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.235 [2024-12-12 10:40:15.125264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.235 [2024-12-12 10:40:15.125270] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:41.235 [2024-12-12 10:40:15.137457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.235 [2024-12-12 10:40:15.137727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.235 [2024-12-12 10:40:15.137743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.235 [2024-12-12 10:40:15.137752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.235 [2024-12-12 10:40:15.137912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.235 [2024-12-12 10:40:15.138072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.235 [2024-12-12 10:40:15.138081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.235 [2024-12-12 10:40:15.138088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.235 [2024-12-12 10:40:15.138094] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.235 [2024-12-12 10:40:15.150270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.235 [2024-12-12 10:40:15.150582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.235 [2024-12-12 10:40:15.150600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.235 [2024-12-12 10:40:15.150608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.235 [2024-12-12 10:40:15.150768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.235 [2024-12-12 10:40:15.150927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.235 [2024-12-12 10:40:15.150936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.235 [2024-12-12 10:40:15.150946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.235 [2024-12-12 10:40:15.150953] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:41.235 [2024-12-12 10:40:15.163241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.235 [2024-12-12 10:40:15.163596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.235 [2024-12-12 10:40:15.163614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.235 [2024-12-12 10:40:15.163622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.235 [2024-12-12 10:40:15.163799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.235 [2024-12-12 10:40:15.163958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.235 [2024-12-12 10:40:15.163968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.235 [2024-12-12 10:40:15.163974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.235 [2024-12-12 10:40:15.163980] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.235 [2024-12-12 10:40:15.176154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.235 [2024-12-12 10:40:15.176587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.235 [2024-12-12 10:40:15.176633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.235 [2024-12-12 10:40:15.176658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.235 [2024-12-12 10:40:15.177033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.235 [2024-12-12 10:40:15.177194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.235 [2024-12-12 10:40:15.177204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.235 [2024-12-12 10:40:15.177210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.235 [2024-12-12 10:40:15.177216] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:41.235 [2024-12-12 10:40:15.189002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.235 [2024-12-12 10:40:15.189353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.235 [2024-12-12 10:40:15.189371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.235 [2024-12-12 10:40:15.189379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.235 [2024-12-12 10:40:15.189539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.235 [2024-12-12 10:40:15.189727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.235 [2024-12-12 10:40:15.189738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.235 [2024-12-12 10:40:15.189745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.235 [2024-12-12 10:40:15.189751] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.235 [2024-12-12 10:40:15.201749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.235 [2024-12-12 10:40:15.202158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.235 [2024-12-12 10:40:15.202205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.235 [2024-12-12 10:40:15.202231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.235 [2024-12-12 10:40:15.202832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.235 [2024-12-12 10:40:15.203258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.235 [2024-12-12 10:40:15.203267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.235 [2024-12-12 10:40:15.203274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.235 [2024-12-12 10:40:15.203280] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:41.235 [2024-12-12 10:40:15.214614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.235 [2024-12-12 10:40:15.215045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.235 [2024-12-12 10:40:15.215095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.235 [2024-12-12 10:40:15.215121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.235 [2024-12-12 10:40:15.215718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.235 [2024-12-12 10:40:15.216211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.235 [2024-12-12 10:40:15.216221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.235 [2024-12-12 10:40:15.216227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.235 [2024-12-12 10:40:15.216233] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.235 [2024-12-12 10:40:15.227420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.235 [2024-12-12 10:40:15.227837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.235 [2024-12-12 10:40:15.227856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.235 [2024-12-12 10:40:15.227865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.235 [2024-12-12 10:40:15.228033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.235 [2024-12-12 10:40:15.228202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.235 [2024-12-12 10:40:15.228211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.235 [2024-12-12 10:40:15.228218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.235 [2024-12-12 10:40:15.228224] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:41.235 [2024-12-12 10:40:15.240406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.236 [2024-12-12 10:40:15.240832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.236 [2024-12-12 10:40:15.240850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.236 [2024-12-12 10:40:15.240861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.236 [2024-12-12 10:40:15.241030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.236 [2024-12-12 10:40:15.241198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.236 [2024-12-12 10:40:15.241209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.236 [2024-12-12 10:40:15.241215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.236 [2024-12-12 10:40:15.241221] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.236 [2024-12-12 10:40:15.253516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.236 [2024-12-12 10:40:15.253931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.496 [2024-12-12 10:40:15.253949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.496 [2024-12-12 10:40:15.253957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.496 [2024-12-12 10:40:15.254130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.496 [2024-12-12 10:40:15.254303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.496 [2024-12-12 10:40:15.254313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.496 [2024-12-12 10:40:15.254319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.496 [2024-12-12 10:40:15.254326] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:41.496 [2024-12-12 10:40:15.266459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.496 [2024-12-12 10:40:15.266744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.496 [2024-12-12 10:40:15.266791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.496 [2024-12-12 10:40:15.266816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.496 [2024-12-12 10:40:15.267378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.496 [2024-12-12 10:40:15.267539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.496 [2024-12-12 10:40:15.267549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.496 [2024-12-12 10:40:15.267555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.496 [2024-12-12 10:40:15.267562] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.496 [2024-12-12 10:40:15.279342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.496 [2024-12-12 10:40:15.279664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.496 [2024-12-12 10:40:15.279682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.496 [2024-12-12 10:40:15.279690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.496 [2024-12-12 10:40:15.279850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.496 [2024-12-12 10:40:15.280012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.496 [2024-12-12 10:40:15.280023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.496 [2024-12-12 10:40:15.280029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.496 [2024-12-12 10:40:15.280035] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:41.496 [2024-12-12 10:40:15.292163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.497 [2024-12-12 10:40:15.292590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.497 [2024-12-12 10:40:15.292645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.497 [2024-12-12 10:40:15.292671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.497 [2024-12-12 10:40:15.293120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.497 [2024-12-12 10:40:15.293281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.497 [2024-12-12 10:40:15.293291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.497 [2024-12-12 10:40:15.293297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.497 [2024-12-12 10:40:15.293303] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.497 [2024-12-12 10:40:15.304942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.497 [2024-12-12 10:40:15.305366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.497 [2024-12-12 10:40:15.305411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.497 [2024-12-12 10:40:15.305436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.497 [2024-12-12 10:40:15.306034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.497 [2024-12-12 10:40:15.306504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.497 [2024-12-12 10:40:15.306522] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.497 [2024-12-12 10:40:15.306537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.497 [2024-12-12 10:40:15.306550] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:41.497 [2024-12-12 10:40:15.319829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.497 [2024-12-12 10:40:15.320337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.497 [2024-12-12 10:40:15.320359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.497 [2024-12-12 10:40:15.320370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.497 [2024-12-12 10:40:15.320632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.497 [2024-12-12 10:40:15.320889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.497 [2024-12-12 10:40:15.320902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.497 [2024-12-12 10:40:15.320915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.497 [2024-12-12 10:40:15.320926] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.497 [2024-12-12 10:40:15.332769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.497 [2024-12-12 10:40:15.333195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.497 [2024-12-12 10:40:15.333213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.497 [2024-12-12 10:40:15.333220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.497 [2024-12-12 10:40:15.333390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.497 [2024-12-12 10:40:15.333558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.497 [2024-12-12 10:40:15.333574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.497 [2024-12-12 10:40:15.333581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.497 [2024-12-12 10:40:15.333589] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:41.497 [2024-12-12 10:40:15.345810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.497 [2024-12-12 10:40:15.346237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.497 [2024-12-12 10:40:15.346290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.497 [2024-12-12 10:40:15.346315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.497 [2024-12-12 10:40:15.346912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.497 [2024-12-12 10:40:15.347406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.497 [2024-12-12 10:40:15.347416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.497 [2024-12-12 10:40:15.347423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.497 [2024-12-12 10:40:15.347429] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.497 [2024-12-12 10:40:15.358555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.497 [2024-12-12 10:40:15.358983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.497 [2024-12-12 10:40:15.359030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.497 [2024-12-12 10:40:15.359055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.497 [2024-12-12 10:40:15.359474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.497 [2024-12-12 10:40:15.359657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.497 [2024-12-12 10:40:15.359666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.497 [2024-12-12 10:40:15.359673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.497 [2024-12-12 10:40:15.359679] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:41.497 [2024-12-12 10:40:15.371372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.497 [2024-12-12 10:40:15.371785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.497 [2024-12-12 10:40:15.371802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.497 [2024-12-12 10:40:15.371810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.497 [2024-12-12 10:40:15.371969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.497 [2024-12-12 10:40:15.372129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.497 [2024-12-12 10:40:15.372138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.497 [2024-12-12 10:40:15.372145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.497 [2024-12-12 10:40:15.372151] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.497 [2024-12-12 10:40:15.384184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.497 [2024-12-12 10:40:15.384601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.497 [2024-12-12 10:40:15.384619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.497 [2024-12-12 10:40:15.384627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.497 [2024-12-12 10:40:15.384787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.497 [2024-12-12 10:40:15.384947] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.497 [2024-12-12 10:40:15.384956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.497 [2024-12-12 10:40:15.384963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.497 [2024-12-12 10:40:15.384969] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:41.497 [2024-12-12 10:40:15.396954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.497 [2024-12-12 10:40:15.397369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.497 [2024-12-12 10:40:15.397386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.497 [2024-12-12 10:40:15.397394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.498 [2024-12-12 10:40:15.397554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.498 [2024-12-12 10:40:15.397750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.498 [2024-12-12 10:40:15.397762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.498 [2024-12-12 10:40:15.397769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.498 [2024-12-12 10:40:15.397775] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.498 [2024-12-12 10:40:15.409748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.498 [2024-12-12 10:40:15.410169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.498 [2024-12-12 10:40:15.410187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.498 [2024-12-12 10:40:15.410198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.498 [2024-12-12 10:40:15.410367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.498 [2024-12-12 10:40:15.410535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.498 [2024-12-12 10:40:15.410545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.498 [2024-12-12 10:40:15.410552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.498 [2024-12-12 10:40:15.410558] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:41.498 [2024-12-12 10:40:15.422686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.498 [2024-12-12 10:40:15.423109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.498 [2024-12-12 10:40:15.423127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.498 [2024-12-12 10:40:15.423135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.498 [2024-12-12 10:40:15.423303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.498 [2024-12-12 10:40:15.423472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.498 [2024-12-12 10:40:15.423482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.498 [2024-12-12 10:40:15.423488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.498 [2024-12-12 10:40:15.423494] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.498 [2024-12-12 10:40:15.435660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.498 [2024-12-12 10:40:15.436062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.498 [2024-12-12 10:40:15.436080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.498 [2024-12-12 10:40:15.436089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.498 [2024-12-12 10:40:15.436258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.498 [2024-12-12 10:40:15.436426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.498 [2024-12-12 10:40:15.436436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.498 [2024-12-12 10:40:15.436442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.498 [2024-12-12 10:40:15.436449] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:41.498 [2024-12-12 10:40:15.448388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.498 [2024-12-12 10:40:15.448733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.498 [2024-12-12 10:40:15.448750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.498 [2024-12-12 10:40:15.448758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.498 [2024-12-12 10:40:15.448917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.498 [2024-12-12 10:40:15.449080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.498 [2024-12-12 10:40:15.449089] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.498 [2024-12-12 10:40:15.449096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.498 [2024-12-12 10:40:15.449102] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.498 [2024-12-12 10:40:15.461217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.498 [2024-12-12 10:40:15.461646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.498 [2024-12-12 10:40:15.461690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.498 [2024-12-12 10:40:15.461713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.498 [2024-12-12 10:40:15.462297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.498 [2024-12-12 10:40:15.462729] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.498 [2024-12-12 10:40:15.462740] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.498 [2024-12-12 10:40:15.462747] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.498 [2024-12-12 10:40:15.462753] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:41.498 [2024-12-12 10:40:15.474055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.498 [2024-12-12 10:40:15.474462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.498 [2024-12-12 10:40:15.474480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.498 [2024-12-12 10:40:15.474487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.498 [2024-12-12 10:40:15.474671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.498 [2024-12-12 10:40:15.474840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.498 [2024-12-12 10:40:15.474850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.498 [2024-12-12 10:40:15.474857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.498 [2024-12-12 10:40:15.474863] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.498 [2024-12-12 10:40:15.486779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.498 [2024-12-12 10:40:15.487213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.498 [2024-12-12 10:40:15.487231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.498 [2024-12-12 10:40:15.487239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.498 [2024-12-12 10:40:15.487407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.498 [2024-12-12 10:40:15.487581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.498 [2024-12-12 10:40:15.487591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.498 [2024-12-12 10:40:15.487620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.498 [2024-12-12 10:40:15.487628] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:41.498 [2024-12-12 10:40:15.499801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.498 [2024-12-12 10:40:15.500219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.498 [2024-12-12 10:40:15.500237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.498 [2024-12-12 10:40:15.500244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.498 [2024-12-12 10:40:15.500414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.498 [2024-12-12 10:40:15.500588] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.498 [2024-12-12 10:40:15.500599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.498 [2024-12-12 10:40:15.500606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.498 [2024-12-12 10:40:15.500613] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.498 [2024-12-12 10:40:15.512795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.498 [2024-12-12 10:40:15.513211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.498 [2024-12-12 10:40:15.513229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.499 [2024-12-12 10:40:15.513237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.499 [2024-12-12 10:40:15.513405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.499 [2024-12-12 10:40:15.513599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.499 [2024-12-12 10:40:15.513610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.499 [2024-12-12 10:40:15.513617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.499 [2024-12-12 10:40:15.513624] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:41.759 [2024-12-12 10:40:15.525826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.759 [2024-12-12 10:40:15.526257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-12-12 10:40:15.526274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.759 [2024-12-12 10:40:15.526282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.759 [2024-12-12 10:40:15.526452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.759 [2024-12-12 10:40:15.526629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.759 [2024-12-12 10:40:15.526639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.759 [2024-12-12 10:40:15.526646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.759 [2024-12-12 10:40:15.526652] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.759 [2024-12-12 10:40:15.538610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.759 [2024-12-12 10:40:15.539019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-12-12 10:40:15.539062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.759 [2024-12-12 10:40:15.539087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.759 [2024-12-12 10:40:15.539685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.759 [2024-12-12 10:40:15.539925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.759 [2024-12-12 10:40:15.539936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.759 [2024-12-12 10:40:15.539942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.759 [2024-12-12 10:40:15.539949] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:41.759 [2024-12-12 10:40:15.551451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.759 [2024-12-12 10:40:15.551858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-12-12 10:40:15.551903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.759 [2024-12-12 10:40:15.551928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.759 [2024-12-12 10:40:15.552473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.759 [2024-12-12 10:40:15.552658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.759 [2024-12-12 10:40:15.552668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.759 [2024-12-12 10:40:15.552676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.759 [2024-12-12 10:40:15.552682] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.759 [2024-12-12 10:40:15.564252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.759 [2024-12-12 10:40:15.564604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-12-12 10:40:15.564622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.759 [2024-12-12 10:40:15.564629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.759 [2024-12-12 10:40:15.564789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.759 [2024-12-12 10:40:15.564950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.759 [2024-12-12 10:40:15.564959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.759 [2024-12-12 10:40:15.564966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.759 [2024-12-12 10:40:15.564972] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:41.759 [2024-12-12 10:40:15.577033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.759 [2024-12-12 10:40:15.577453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-12-12 10:40:15.577497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.759 [2024-12-12 10:40:15.577529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.759 [2024-12-12 10:40:15.577942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.759 [2024-12-12 10:40:15.578113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.759 [2024-12-12 10:40:15.578122] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.759 [2024-12-12 10:40:15.578129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.759 [2024-12-12 10:40:15.578135] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.759 [2024-12-12 10:40:15.589765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.759 [2024-12-12 10:40:15.590176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-12-12 10:40:15.590193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.759 [2024-12-12 10:40:15.590201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.759 [2024-12-12 10:40:15.590362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.759 [2024-12-12 10:40:15.590521] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.759 [2024-12-12 10:40:15.590531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.759 [2024-12-12 10:40:15.590537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.759 [2024-12-12 10:40:15.590543] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:41.759 [2024-12-12 10:40:15.602522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.759 [2024-12-12 10:40:15.602950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-12-12 10:40:15.602996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.759 [2024-12-12 10:40:15.603021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.759 [2024-12-12 10:40:15.603617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.759 [2024-12-12 10:40:15.604105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.759 [2024-12-12 10:40:15.604115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.759 [2024-12-12 10:40:15.604121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.759 [2024-12-12 10:40:15.604127] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.759 [2024-12-12 10:40:15.615303] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.760 [2024-12-12 10:40:15.615719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-12-12 10:40:15.615736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.760 [2024-12-12 10:40:15.615743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.760 [2024-12-12 10:40:15.615903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.760 [2024-12-12 10:40:15.616065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.760 [2024-12-12 10:40:15.616075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.760 [2024-12-12 10:40:15.616082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.760 [2024-12-12 10:40:15.616088] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:41.760 [2024-12-12 10:40:15.628108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.760 [2024-12-12 10:40:15.628534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-12-12 10:40:15.628588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.760 [2024-12-12 10:40:15.628614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.760 [2024-12-12 10:40:15.629055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.760 [2024-12-12 10:40:15.629216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.760 [2024-12-12 10:40:15.629238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.760 [2024-12-12 10:40:15.629254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.760 [2024-12-12 10:40:15.629267] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.760 [2024-12-12 10:40:15.642829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.760 [2024-12-12 10:40:15.643356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-12-12 10:40:15.643401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.760 [2024-12-12 10:40:15.643425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.760 [2024-12-12 10:40:15.643901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.760 [2024-12-12 10:40:15.644158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.760 [2024-12-12 10:40:15.644171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.760 [2024-12-12 10:40:15.644181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.760 [2024-12-12 10:40:15.644191] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:41.760 [2024-12-12 10:40:15.655873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.760 [2024-12-12 10:40:15.656287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-12-12 10:40:15.656304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.760 [2024-12-12 10:40:15.656313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.760 [2024-12-12 10:40:15.656481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.760 [2024-12-12 10:40:15.656656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.760 [2024-12-12 10:40:15.656666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.760 [2024-12-12 10:40:15.656673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.760 [2024-12-12 10:40:15.656684] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.760 [2024-12-12 10:40:15.668799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.760 [2024-12-12 10:40:15.669158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-12-12 10:40:15.669175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.760 [2024-12-12 10:40:15.669184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.760 [2024-12-12 10:40:15.669352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.760 [2024-12-12 10:40:15.669521] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.760 [2024-12-12 10:40:15.669531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.760 [2024-12-12 10:40:15.669538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.760 [2024-12-12 10:40:15.669544] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:41.760 [2024-12-12 10:40:15.681682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.760 [2024-12-12 10:40:15.682146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-12-12 10:40:15.682162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.760 [2024-12-12 10:40:15.682170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.760 [2024-12-12 10:40:15.682331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.760 [2024-12-12 10:40:15.682490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.760 [2024-12-12 10:40:15.682499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.760 [2024-12-12 10:40:15.682506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.760 [2024-12-12 10:40:15.682512] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.760 [2024-12-12 10:40:15.694535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.760 [2024-12-12 10:40:15.694885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-12-12 10:40:15.694903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.760 [2024-12-12 10:40:15.694912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.760 [2024-12-12 10:40:15.695071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.760 [2024-12-12 10:40:15.695231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.760 [2024-12-12 10:40:15.695240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.760 [2024-12-12 10:40:15.695246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.760 [2024-12-12 10:40:15.695253] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:41.760 [2024-12-12 10:40:15.707391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.760 [2024-12-12 10:40:15.707816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-12-12 10:40:15.707863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.760 [2024-12-12 10:40:15.707888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.760 [2024-12-12 10:40:15.708409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.760 [2024-12-12 10:40:15.708575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.760 [2024-12-12 10:40:15.708585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.760 [2024-12-12 10:40:15.708607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.760 [2024-12-12 10:40:15.708615] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.760 [2024-12-12 10:40:15.720217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.760 [2024-12-12 10:40:15.720633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-12-12 10:40:15.720652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.760 [2024-12-12 10:40:15.720660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.760 [2024-12-12 10:40:15.720829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.760 [2024-12-12 10:40:15.720997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.760 [2024-12-12 10:40:15.721008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.760 [2024-12-12 10:40:15.721015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.760 [2024-12-12 10:40:15.721021] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:41.760 [2024-12-12 10:40:15.733035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.761 [2024-12-12 10:40:15.733463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.761 [2024-12-12 10:40:15.733507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.761 [2024-12-12 10:40:15.733532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.761 [2024-12-12 10:40:15.734071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.761 [2024-12-12 10:40:15.734242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.761 [2024-12-12 10:40:15.734253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.761 [2024-12-12 10:40:15.734260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.761 [2024-12-12 10:40:15.734266] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.761 [2024-12-12 10:40:15.745969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.761 [2024-12-12 10:40:15.746374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.761 [2024-12-12 10:40:15.746392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.761 [2024-12-12 10:40:15.746399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.761 [2024-12-12 10:40:15.746579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.761 [2024-12-12 10:40:15.746768] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.761 [2024-12-12 10:40:15.746779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.761 [2024-12-12 10:40:15.746785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.761 [2024-12-12 10:40:15.746792] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:41.761 [2024-12-12 10:40:15.758946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.761 [2024-12-12 10:40:15.759288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.761 [2024-12-12 10:40:15.759333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.761 [2024-12-12 10:40:15.759358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.761 [2024-12-12 10:40:15.759936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.761 [2024-12-12 10:40:15.760328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.761 [2024-12-12 10:40:15.760346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.761 [2024-12-12 10:40:15.760361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.761 [2024-12-12 10:40:15.760375] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.761 [2024-12-12 10:40:15.773776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.761 [2024-12-12 10:40:15.774295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.761 [2024-12-12 10:40:15.774318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:41.761 [2024-12-12 10:40:15.774330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:41.761 [2024-12-12 10:40:15.774593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:41.761 [2024-12-12 10:40:15.774851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.761 [2024-12-12 10:40:15.774864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.761 [2024-12-12 10:40:15.774874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.761 [2024-12-12 10:40:15.774883] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:42.022 [2024-12-12 10:40:15.786774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.022 [2024-12-12 10:40:15.787202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.022 [2024-12-12 10:40:15.787220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:42.022 [2024-12-12 10:40:15.787228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:42.022 [2024-12-12 10:40:15.787401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:42.022 [2024-12-12 10:40:15.787581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.022 [2024-12-12 10:40:15.787596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.022 [2024-12-12 10:40:15.787603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.022 [2024-12-12 10:40:15.787610] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.022 [2024-12-12 10:40:15.799645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.022 [2024-12-12 10:40:15.800065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.022 [2024-12-12 10:40:15.800109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:42.022 [2024-12-12 10:40:15.800134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:42.022 [2024-12-12 10:40:15.800577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:42.022 [2024-12-12 10:40:15.800763] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.022 [2024-12-12 10:40:15.800773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.022 [2024-12-12 10:40:15.800779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.022 [2024-12-12 10:40:15.800786] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:42.022 [2024-12-12 10:40:15.814785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.022 [2024-12-12 10:40:15.815259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.022 [2024-12-12 10:40:15.815282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:42.022 [2024-12-12 10:40:15.815294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:42.022 [2024-12-12 10:40:15.815549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:42.022 [2024-12-12 10:40:15.815814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.022 [2024-12-12 10:40:15.815828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.022 [2024-12-12 10:40:15.815838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.022 [2024-12-12 10:40:15.815849] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.022 [2024-12-12 10:40:15.827744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.022 [2024-12-12 10:40:15.828179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.022 [2024-12-12 10:40:15.828223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:42.022 [2024-12-12 10:40:15.828249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:42.022 [2024-12-12 10:40:15.828689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:42.022 [2024-12-12 10:40:15.828860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.022 [2024-12-12 10:40:15.828870] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.022 [2024-12-12 10:40:15.828877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.022 [2024-12-12 10:40:15.828888] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:42.022 [2024-12-12 10:40:15.840682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.022 [2024-12-12 10:40:15.841082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.022 [2024-12-12 10:40:15.841098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:42.022 [2024-12-12 10:40:15.841106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:42.022 [2024-12-12 10:40:15.841266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:42.022 [2024-12-12 10:40:15.841426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.022 [2024-12-12 10:40:15.841436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.022 [2024-12-12 10:40:15.841442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.022 [2024-12-12 10:40:15.841449] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.022 [2024-12-12 10:40:15.853592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.022 [2024-12-12 10:40:15.853894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.022 [2024-12-12 10:40:15.853911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:42.022 [2024-12-12 10:40:15.853918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:42.022 [2024-12-12 10:40:15.854077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:42.022 [2024-12-12 10:40:15.854237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.022 [2024-12-12 10:40:15.854246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.022 [2024-12-12 10:40:15.854253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.022 [2024-12-12 10:40:15.854259] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.022 [2024-12-12 10:40:15.866565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.022 [2024-12-12 10:40:15.866984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.022 [2024-12-12 10:40:15.867001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:42.022 [2024-12-12 10:40:15.867009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:42.022 [2024-12-12 10:40:15.867177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:42.022 [2024-12-12 10:40:15.867345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.022 [2024-12-12 10:40:15.867356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.022 [2024-12-12 10:40:15.867363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.022 [2024-12-12 10:40:15.867369] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.022 [2024-12-12 10:40:15.879401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.022 [2024-12-12 10:40:15.879769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.022 [2024-12-12 10:40:15.879786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:42.023 [2024-12-12 10:40:15.879794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:42.023 [2024-12-12 10:40:15.879953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:42.023 [2024-12-12 10:40:15.880113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.023 [2024-12-12 10:40:15.880122] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.023 [2024-12-12 10:40:15.880129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.023 [2024-12-12 10:40:15.880135] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.023 [2024-12-12 10:40:15.892182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.023 [2024-12-12 10:40:15.892603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.023 [2024-12-12 10:40:15.892621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:42.023 [2024-12-12 10:40:15.892629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:42.023 [2024-12-12 10:40:15.892788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:42.023 [2024-12-12 10:40:15.892949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.023 [2024-12-12 10:40:15.892960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.023 [2024-12-12 10:40:15.892966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.023 [2024-12-12 10:40:15.892973] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.023 7269.50 IOPS, 28.40 MiB/s [2024-12-12T09:40:16.046Z]
00:26:42.023 [2024-12-12 10:40:15.905006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.023 [2024-12-12 10:40:15.905343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.023 [2024-12-12 10:40:15.905360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:42.023 [2024-12-12 10:40:15.905368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:42.023 [2024-12-12 10:40:15.905527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:42.023 [2024-12-12 10:40:15.905712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.023 [2024-12-12 10:40:15.905723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.023 [2024-12-12 10:40:15.905730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.023 [2024-12-12 10:40:15.905737] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.023 [2024-12-12 10:40:15.918039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.023 [2024-12-12 10:40:15.918473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.023 [2024-12-12 10:40:15.918518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:42.023 [2024-12-12 10:40:15.918543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:42.023 [2024-12-12 10:40:15.918975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:42.023 [2024-12-12 10:40:15.919146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.023 [2024-12-12 10:40:15.919155] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.023 [2024-12-12 10:40:15.919162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.023 [2024-12-12 10:40:15.919168] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.023 [2024-12-12 10:40:15.930888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.023 [2024-12-12 10:40:15.931224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.023 [2024-12-12 10:40:15.931242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:42.023 [2024-12-12 10:40:15.931249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:42.023 [2024-12-12 10:40:15.931409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:42.023 [2024-12-12 10:40:15.931568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.023 [2024-12-12 10:40:15.931583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.023 [2024-12-12 10:40:15.931590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.023 [2024-12-12 10:40:15.931612] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.023 [2024-12-12 10:40:15.943812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.023 [2024-12-12 10:40:15.944237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.023 [2024-12-12 10:40:15.944283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:42.023 [2024-12-12 10:40:15.944308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:42.023 [2024-12-12 10:40:15.944907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:42.023 [2024-12-12 10:40:15.945134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.023 [2024-12-12 10:40:15.945143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.023 [2024-12-12 10:40:15.945149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.023 [2024-12-12 10:40:15.945156] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.023 [2024-12-12 10:40:15.956630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.023 [2024-12-12 10:40:15.956976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.023 [2024-12-12 10:40:15.956993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:42.023 [2024-12-12 10:40:15.957000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:42.023 [2024-12-12 10:40:15.957159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:42.023 [2024-12-12 10:40:15.957318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.023 [2024-12-12 10:40:15.957331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.023 [2024-12-12 10:40:15.957337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.023 [2024-12-12 10:40:15.957344] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.023 [2024-12-12 10:40:15.969365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.023 [2024-12-12 10:40:15.969708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.023 [2024-12-12 10:40:15.969727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:42.023 [2024-12-12 10:40:15.969734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:42.023 [2024-12-12 10:40:15.969893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:42.023 [2024-12-12 10:40:15.970053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.023 [2024-12-12 10:40:15.970062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.023 [2024-12-12 10:40:15.970069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.023 [2024-12-12 10:40:15.970075] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.023 [2024-12-12 10:40:15.982190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.023 [2024-12-12 10:40:15.982598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.023 [2024-12-12 10:40:15.982616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:42.023 [2024-12-12 10:40:15.982623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:42.023 [2024-12-12 10:40:15.982783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:42.023 [2024-12-12 10:40:15.982943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.023 [2024-12-12 10:40:15.982953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.023 [2024-12-12 10:40:15.982959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.023 [2024-12-12 10:40:15.982965] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.023 [2024-12-12 10:40:15.994990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.023 [2024-12-12 10:40:15.995405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.023 [2024-12-12 10:40:15.995422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:42.023 [2024-12-12 10:40:15.995430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:42.023 [2024-12-12 10:40:15.995596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:42.024 [2024-12-12 10:40:15.995780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.024 [2024-12-12 10:40:15.995790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.024 [2024-12-12 10:40:15.995797] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.024 [2024-12-12 10:40:15.995808] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.024 [2024-12-12 10:40:16.007759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.024 [2024-12-12 10:40:16.008169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.024 [2024-12-12 10:40:16.008187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:42.024 [2024-12-12 10:40:16.008195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:42.024 [2024-12-12 10:40:16.008364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:42.024 [2024-12-12 10:40:16.008532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.024 [2024-12-12 10:40:16.008542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.024 [2024-12-12 10:40:16.008549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.024 [2024-12-12 10:40:16.008556] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.024 [2024-12-12 10:40:16.020747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.024 [2024-12-12 10:40:16.021174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.024 [2024-12-12 10:40:16.021223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:42.024 [2024-12-12 10:40:16.021248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:42.024 [2024-12-12 10:40:16.021807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:42.024 [2024-12-12 10:40:16.021978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.024 [2024-12-12 10:40:16.021987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.024 [2024-12-12 10:40:16.021994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.024 [2024-12-12 10:40:16.022001] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.024 [2024-12-12 10:40:16.033705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.024 [2024-12-12 10:40:16.034057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.024 [2024-12-12 10:40:16.034074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:42.024 [2024-12-12 10:40:16.034082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:42.024 [2024-12-12 10:40:16.034251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:42.024 [2024-12-12 10:40:16.034420] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.024 [2024-12-12 10:40:16.034430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.024 [2024-12-12 10:40:16.034436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.024 [2024-12-12 10:40:16.034442] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.285 [2024-12-12 10:40:16.046796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.285 [2024-12-12 10:40:16.047236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.285 [2024-12-12 10:40:16.047288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:42.285 [2024-12-12 10:40:16.047314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:42.285 [2024-12-12 10:40:16.047803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:42.285 [2024-12-12 10:40:16.047974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.285 [2024-12-12 10:40:16.047984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.285 [2024-12-12 10:40:16.047991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.285 [2024-12-12 10:40:16.047998] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.285 [2024-12-12 10:40:16.059641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.285 [2024-12-12 10:40:16.060060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.285 [2024-12-12 10:40:16.060077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:42.285 [2024-12-12 10:40:16.060085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:42.285 [2024-12-12 10:40:16.060243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:42.285 [2024-12-12 10:40:16.060403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.285 [2024-12-12 10:40:16.060412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.285 [2024-12-12 10:40:16.060419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.285 [2024-12-12 10:40:16.060425] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.285 [2024-12-12 10:40:16.072441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.285 [2024-12-12 10:40:16.072781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.285 [2024-12-12 10:40:16.072798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:42.285 [2024-12-12 10:40:16.072806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:42.285 [2024-12-12 10:40:16.072965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:42.285 [2024-12-12 10:40:16.073125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.285 [2024-12-12 10:40:16.073134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.285 [2024-12-12 10:40:16.073141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.285 [2024-12-12 10:40:16.073146] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.285 [2024-12-12 10:40:16.085262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.285 [2024-12-12 10:40:16.085654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.285 [2024-12-12 10:40:16.085673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:42.285 [2024-12-12 10:40:16.085681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:42.285 [2024-12-12 10:40:16.085844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:42.285 [2024-12-12 10:40:16.086004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.285 [2024-12-12 10:40:16.086014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.285 [2024-12-12 10:40:16.086020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.285 [2024-12-12 10:40:16.086026] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.285 [2024-12-12 10:40:16.098059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.285 [2024-12-12 10:40:16.098472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.285 [2024-12-12 10:40:16.098489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:42.285 [2024-12-12 10:40:16.098497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:42.285 [2024-12-12 10:40:16.098663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:42.285 [2024-12-12 10:40:16.098823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.285 [2024-12-12 10:40:16.098833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.285 [2024-12-12 10:40:16.098839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.285 [2024-12-12 10:40:16.098845] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.285 [2024-12-12 10:40:16.110913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.285 [2024-12-12 10:40:16.111323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.285 [2024-12-12 10:40:16.111340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:42.285 [2024-12-12 10:40:16.111348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:42.285 [2024-12-12 10:40:16.111507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:42.285 [2024-12-12 10:40:16.111694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.285 [2024-12-12 10:40:16.111704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.285 [2024-12-12 10:40:16.111711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.285 [2024-12-12 10:40:16.111717] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.285 [2024-12-12 10:40:16.123719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.285 [2024-12-12 10:40:16.124136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.285 [2024-12-12 10:40:16.124153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:42.285 [2024-12-12 10:40:16.124161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:42.285 [2024-12-12 10:40:16.124320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:42.285 [2024-12-12 10:40:16.124479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.285 [2024-12-12 10:40:16.124494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.285 [2024-12-12 10:40:16.124501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.285 [2024-12-12 10:40:16.124507] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.285 [2024-12-12 10:40:16.136476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.285 [2024-12-12 10:40:16.136898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.285 [2024-12-12 10:40:16.136945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:42.285 [2024-12-12 10:40:16.136970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:42.285 [2024-12-12 10:40:16.137426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:42.285 [2024-12-12 10:40:16.137593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.285 [2024-12-12 10:40:16.137617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.285 [2024-12-12 10:40:16.137624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.285 [2024-12-12 10:40:16.137631] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.285 [2024-12-12 10:40:16.149426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.285 [2024-12-12 10:40:16.149895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.285 [2024-12-12 10:40:16.149941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:42.285 [2024-12-12 10:40:16.149966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:42.285 [2024-12-12 10:40:16.150383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:42.286 [2024-12-12 10:40:16.150553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.286 [2024-12-12 10:40:16.150565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.286 [2024-12-12 10:40:16.150576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.286 [2024-12-12 10:40:16.150583] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.286 [2024-12-12 10:40:16.162199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.286 [2024-12-12 10:40:16.162531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.286 [2024-12-12 10:40:16.162548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:42.286 [2024-12-12 10:40:16.162555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:42.286 [2024-12-12 10:40:16.162742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:42.286 [2024-12-12 10:40:16.162912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.286 [2024-12-12 10:40:16.162921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.286 [2024-12-12 10:40:16.162928] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.286 [2024-12-12 10:40:16.162938] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.286 [2024-12-12 10:40:16.175110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.286 [2024-12-12 10:40:16.175458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.286 [2024-12-12 10:40:16.175515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:42.286 [2024-12-12 10:40:16.175539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:42.286 [2024-12-12 10:40:16.176053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:42.286 [2024-12-12 10:40:16.176214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.286 [2024-12-12 10:40:16.176223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.286 [2024-12-12 10:40:16.176230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.286 [2024-12-12 10:40:16.176236] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.286 [2024-12-12 10:40:16.188177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.286 [2024-12-12 10:40:16.188589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.286 [2024-12-12 10:40:16.188608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:42.286 [2024-12-12 10:40:16.188617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:42.286 [2024-12-12 10:40:16.188791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:42.286 [2024-12-12 10:40:16.188966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.286 [2024-12-12 10:40:16.188976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.286 [2024-12-12 10:40:16.188984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.286 [2024-12-12 10:40:16.188991] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.286 [2024-12-12 10:40:16.201014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.286 [2024-12-12 10:40:16.201458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.286 [2024-12-12 10:40:16.201503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:42.286 [2024-12-12 10:40:16.201530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:42.286 [2024-12-12 10:40:16.202130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:42.286 [2024-12-12 10:40:16.202590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.286 [2024-12-12 10:40:16.202600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.286 [2024-12-12 10:40:16.202607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.286 [2024-12-12 10:40:16.202613] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.286 [2024-12-12 10:40:16.213801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.286 [2024-12-12 10:40:16.214219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.286 [2024-12-12 10:40:16.214274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:42.286 [2024-12-12 10:40:16.214299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:42.286 [2024-12-12 10:40:16.214896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:42.286 [2024-12-12 10:40:16.215360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.286 [2024-12-12 10:40:16.215371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.286 [2024-12-12 10:40:16.215377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.286 [2024-12-12 10:40:16.215383] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.286 [2024-12-12 10:40:16.226721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.286 [2024-12-12 10:40:16.226990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.286 [2024-12-12 10:40:16.227008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:42.286 [2024-12-12 10:40:16.227016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:42.286 [2024-12-12 10:40:16.227176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:42.286 [2024-12-12 10:40:16.227336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.286 [2024-12-12 10:40:16.227346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.286 [2024-12-12 10:40:16.227352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.286 [2024-12-12 10:40:16.227358] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.286 [2024-12-12 10:40:16.239568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.286 [2024-12-12 10:40:16.239942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.286 [2024-12-12 10:40:16.239960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:42.286 [2024-12-12 10:40:16.239968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:42.286 [2024-12-12 10:40:16.240137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:42.286 [2024-12-12 10:40:16.240305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.286 [2024-12-12 10:40:16.240315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.286 [2024-12-12 10:40:16.240322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.286 [2024-12-12 10:40:16.240328] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.286 [2024-12-12 10:40:16.252528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.286 [2024-12-12 10:40:16.252885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.286 [2024-12-12 10:40:16.252903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:42.286 [2024-12-12 10:40:16.252910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:42.286 [2024-12-12 10:40:16.253073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:42.286 [2024-12-12 10:40:16.253232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.286 [2024-12-12 10:40:16.253242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.286 [2024-12-12 10:40:16.253249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.286 [2024-12-12 10:40:16.253255] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.286 [2024-12-12 10:40:16.265477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.286 [2024-12-12 10:40:16.265826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.286 [2024-12-12 10:40:16.265844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:42.286 [2024-12-12 10:40:16.265852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:42.286 [2024-12-12 10:40:16.266025] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:42.287 [2024-12-12 10:40:16.266199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.287 [2024-12-12 10:40:16.266209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.287 [2024-12-12 10:40:16.266216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.287 [2024-12-12 10:40:16.266223] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.287 [2024-12-12 10:40:16.278410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.287 [2024-12-12 10:40:16.278751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.287 [2024-12-12 10:40:16.278770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:42.287 [2024-12-12 10:40:16.278778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:42.287 [2024-12-12 10:40:16.278947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:42.287 [2024-12-12 10:40:16.279116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.287 [2024-12-12 10:40:16.279126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.287 [2024-12-12 10:40:16.279133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.287 [2024-12-12 10:40:16.279139] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.287 [2024-12-12 10:40:16.291333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.287 [2024-12-12 10:40:16.291714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.287 [2024-12-12 10:40:16.291733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:42.287 [2024-12-12 10:40:16.291741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:42.287 [2024-12-12 10:40:16.291910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:42.287 [2024-12-12 10:40:16.292078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.287 [2024-12-12 10:40:16.292088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.287 [2024-12-12 10:40:16.292099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.287 [2024-12-12 10:40:16.292106] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.287 [2024-12-12 10:40:16.304323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.287 [2024-12-12 10:40:16.304721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.287 [2024-12-12 10:40:16.304761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:42.287 [2024-12-12 10:40:16.304788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:42.287 [2024-12-12 10:40:16.305371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:42.287 [2024-12-12 10:40:16.305607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.287 [2024-12-12 10:40:16.305618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.287 [2024-12-12 10:40:16.305626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.287 [2024-12-12 10:40:16.305632] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.548 [2024-12-12 10:40:16.317151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.548 [2024-12-12 10:40:16.317503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.548 [2024-12-12 10:40:16.317521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:42.548 [2024-12-12 10:40:16.317529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:42.548 [2024-12-12 10:40:16.317704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:42.548 [2024-12-12 10:40:16.317880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.548 [2024-12-12 10:40:16.317890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.548 [2024-12-12 10:40:16.317897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.548 [2024-12-12 10:40:16.317903] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.548 [2024-12-12 10:40:16.329993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.548 [2024-12-12 10:40:16.330327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.548 [2024-12-12 10:40:16.330345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:42.548 [2024-12-12 10:40:16.330354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:42.548 [2024-12-12 10:40:16.330523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:42.548 [2024-12-12 10:40:16.330698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.548 [2024-12-12 10:40:16.330709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.548 [2024-12-12 10:40:16.330716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.548 [2024-12-12 10:40:16.330722] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.548 [2024-12-12 10:40:16.342961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.548 [2024-12-12 10:40:16.343406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.548 [2024-12-12 10:40:16.343424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:42.548 [2024-12-12 10:40:16.343432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:42.548 [2024-12-12 10:40:16.343606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:42.548 [2024-12-12 10:40:16.343775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.548 [2024-12-12 10:40:16.343786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.548 [2024-12-12 10:40:16.343793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.548 [2024-12-12 10:40:16.343799] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.548 [2024-12-12 10:40:16.355766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.548 [2024-12-12 10:40:16.356082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.548 [2024-12-12 10:40:16.356099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:42.548 [2024-12-12 10:40:16.356106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:42.548 [2024-12-12 10:40:16.356266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:42.548 [2024-12-12 10:40:16.356425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.548 [2024-12-12 10:40:16.356436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.548 [2024-12-12 10:40:16.356443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.548 [2024-12-12 10:40:16.356449] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.548 [2024-12-12 10:40:16.368850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.548 [2024-12-12 10:40:16.369303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.548 [2024-12-12 10:40:16.369321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:42.548 [2024-12-12 10:40:16.369329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:42.548 [2024-12-12 10:40:16.369497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:42.548 [2024-12-12 10:40:16.369686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.548 [2024-12-12 10:40:16.369697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.548 [2024-12-12 10:40:16.369704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.548 [2024-12-12 10:40:16.369711] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.548 [2024-12-12 10:40:16.381894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.548 [2024-12-12 10:40:16.382253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.548 [2024-12-12 10:40:16.382273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:42.548 [2024-12-12 10:40:16.382282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:42.548 [2024-12-12 10:40:16.382455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:42.548 [2024-12-12 10:40:16.382634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.548 [2024-12-12 10:40:16.382645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.548 [2024-12-12 10:40:16.382652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.548 [2024-12-12 10:40:16.382659] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.548 [2024-12-12 10:40:16.394997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.548 [2024-12-12 10:40:16.395352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.548 [2024-12-12 10:40:16.395370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:42.548 [2024-12-12 10:40:16.395378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:42.548 [2024-12-12 10:40:16.395551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:42.548 [2024-12-12 10:40:16.395731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.548 [2024-12-12 10:40:16.395742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.548 [2024-12-12 10:40:16.395750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.548 [2024-12-12 10:40:16.395757] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:42.548 [2024-12-12 10:40:16.408099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.548 [2024-12-12 10:40:16.408503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.548 [2024-12-12 10:40:16.408522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:42.548 [2024-12-12 10:40:16.408530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:42.548 [2024-12-12 10:40:16.408712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:42.548 [2024-12-12 10:40:16.408886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.548 [2024-12-12 10:40:16.408897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.548 [2024-12-12 10:40:16.408904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.548 [2024-12-12 10:40:16.408910] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.548 [2024-12-12 10:40:16.421228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.548 [2024-12-12 10:40:16.421663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.548 [2024-12-12 10:40:16.421683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:42.548 [2024-12-12 10:40:16.421692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:42.548 [2024-12-12 10:40:16.421877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:42.548 [2024-12-12 10:40:16.422066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.548 [2024-12-12 10:40:16.422077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.548 [2024-12-12 10:40:16.422084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.548 [2024-12-12 10:40:16.422091] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:42.549 [2024-12-12 10:40:16.434455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.549 [2024-12-12 10:40:16.434902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.549 [2024-12-12 10:40:16.434920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:42.549 [2024-12-12 10:40:16.434929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:42.549 [2024-12-12 10:40:16.435113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:42.549 [2024-12-12 10:40:16.435298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.549 [2024-12-12 10:40:16.435308] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.549 [2024-12-12 10:40:16.435317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.549 [2024-12-12 10:40:16.435324] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.549 [2024-12-12 10:40:16.447684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.549 [2024-12-12 10:40:16.448123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.549 [2024-12-12 10:40:16.448142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:42.549 [2024-12-12 10:40:16.448150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:42.549 [2024-12-12 10:40:16.448323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:42.549 [2024-12-12 10:40:16.448497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.549 [2024-12-12 10:40:16.448507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.549 [2024-12-12 10:40:16.448514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.549 [2024-12-12 10:40:16.448521] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:42.549 [2024-12-12 10:40:16.460787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.549 [2024-12-12 10:40:16.461130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.549 [2024-12-12 10:40:16.461148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:42.549 [2024-12-12 10:40:16.461157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:42.549 [2024-12-12 10:40:16.461330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:42.549 [2024-12-12 10:40:16.461503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.549 [2024-12-12 10:40:16.461513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.549 [2024-12-12 10:40:16.461524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.549 [2024-12-12 10:40:16.461530] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.549 [2024-12-12 10:40:16.473872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.549 [2024-12-12 10:40:16.474283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.549 [2024-12-12 10:40:16.474301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:42.549 [2024-12-12 10:40:16.474309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:42.549 [2024-12-12 10:40:16.474482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:42.549 [2024-12-12 10:40:16.474664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.549 [2024-12-12 10:40:16.474675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.549 [2024-12-12 10:40:16.474682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.549 [2024-12-12 10:40:16.474689] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:42.549 [2024-12-12 10:40:16.487099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.549 [2024-12-12 10:40:16.487538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.549 [2024-12-12 10:40:16.487557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:42.549 [2024-12-12 10:40:16.487565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:42.549 [2024-12-12 10:40:16.487756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:42.549 [2024-12-12 10:40:16.487941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.549 [2024-12-12 10:40:16.487952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.549 [2024-12-12 10:40:16.487959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.549 [2024-12-12 10:40:16.487966] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.549 [2024-12-12 10:40:16.500183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.549 [2024-12-12 10:40:16.500621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.549 [2024-12-12 10:40:16.500670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:42.549 [2024-12-12 10:40:16.500695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:42.549 [2024-12-12 10:40:16.501173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:42.549 [2024-12-12 10:40:16.501348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.549 [2024-12-12 10:40:16.501358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.549 [2024-12-12 10:40:16.501364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.549 [2024-12-12 10:40:16.501371] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:42.549 [2024-12-12 10:40:16.513165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.549 [2024-12-12 10:40:16.513430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.549 [2024-12-12 10:40:16.513448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:42.549 [2024-12-12 10:40:16.513456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:42.549 [2024-12-12 10:40:16.513633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:42.549 [2024-12-12 10:40:16.513803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.549 [2024-12-12 10:40:16.513813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.549 [2024-12-12 10:40:16.513819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.549 [2024-12-12 10:40:16.513826] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.549 [2024-12-12 10:40:16.526161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.549 [2024-12-12 10:40:16.526562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.549 [2024-12-12 10:40:16.526588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:42.549 [2024-12-12 10:40:16.526596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:42.549 [2024-12-12 10:40:16.526770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:42.549 [2024-12-12 10:40:16.526944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.549 [2024-12-12 10:40:16.526954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.549 [2024-12-12 10:40:16.526961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.549 [2024-12-12 10:40:16.526968] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:42.549 [2024-12-12 10:40:16.539154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.549 [2024-12-12 10:40:16.539534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.549 [2024-12-12 10:40:16.539552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:42.549 [2024-12-12 10:40:16.539559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:42.549 [2024-12-12 10:40:16.539736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:42.549 [2024-12-12 10:40:16.539906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.549 [2024-12-12 10:40:16.539916] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.549 [2024-12-12 10:40:16.539923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.549 [2024-12-12 10:40:16.539930] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.549 [2024-12-12 10:40:16.551946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.549 [2024-12-12 10:40:16.552199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.549 [2024-12-12 10:40:16.552216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:42.549 [2024-12-12 10:40:16.552228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:42.549 [2024-12-12 10:40:16.552388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:42.550 [2024-12-12 10:40:16.552548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.550 [2024-12-12 10:40:16.552557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.550 [2024-12-12 10:40:16.552564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.550 [2024-12-12 10:40:16.552577] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:42.550 [2024-12-12 10:40:16.564893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.550 [2024-12-12 10:40:16.565180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.550 [2024-12-12 10:40:16.565198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:42.550 [2024-12-12 10:40:16.565206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:42.550 [2024-12-12 10:40:16.565378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:42.550 [2024-12-12 10:40:16.565551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.550 [2024-12-12 10:40:16.565561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.550 [2024-12-12 10:40:16.565574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.550 [2024-12-12 10:40:16.565581] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.811 [2024-12-12 10:40:16.577945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.811 [2024-12-12 10:40:16.578370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.811 [2024-12-12 10:40:16.578388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:42.811 [2024-12-12 10:40:16.578395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:42.811 [2024-12-12 10:40:16.578563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:42.811 [2024-12-12 10:40:16.578739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.811 [2024-12-12 10:40:16.578749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.811 [2024-12-12 10:40:16.578756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.811 [2024-12-12 10:40:16.578762] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:42.811 [2024-12-12 10:40:16.590669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.811 [2024-12-12 10:40:16.591059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.811 [2024-12-12 10:40:16.591076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:42.811 [2024-12-12 10:40:16.591084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:42.811 [2024-12-12 10:40:16.591244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:42.811 [2024-12-12 10:40:16.591407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.811 [2024-12-12 10:40:16.591416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.811 [2024-12-12 10:40:16.591422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.811 [2024-12-12 10:40:16.591428] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.811 [2024-12-12 10:40:16.603504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.811 [2024-12-12 10:40:16.603867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.811 [2024-12-12 10:40:16.603912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:42.811 [2024-12-12 10:40:16.603936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:42.811 [2024-12-12 10:40:16.604423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:42.811 [2024-12-12 10:40:16.604595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.811 [2024-12-12 10:40:16.604606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.811 [2024-12-12 10:40:16.604614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.811 [2024-12-12 10:40:16.604621] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:42.811 [2024-12-12 10:40:16.616265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.811 [2024-12-12 10:40:16.616614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.811 [2024-12-12 10:40:16.616632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:42.811 [2024-12-12 10:40:16.616639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:42.811 [2024-12-12 10:40:16.616799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:42.811 [2024-12-12 10:40:16.616959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.811 [2024-12-12 10:40:16.616969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.811 [2024-12-12 10:40:16.616975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.811 [2024-12-12 10:40:16.616981] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.811 [2024-12-12 10:40:16.628997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.811 [2024-12-12 10:40:16.629390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.811 [2024-12-12 10:40:16.629407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:42.811 [2024-12-12 10:40:16.629415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:42.811 [2024-12-12 10:40:16.629581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:42.811 [2024-12-12 10:40:16.629763] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.811 [2024-12-12 10:40:16.629773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.811 [2024-12-12 10:40:16.629784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.811 [2024-12-12 10:40:16.629791] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:42.811 [2024-12-12 10:40:16.641835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.811 [2024-12-12 10:40:16.642175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.811 [2024-12-12 10:40:16.642192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:42.811 [2024-12-12 10:40:16.642200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:42.811 [2024-12-12 10:40:16.642359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:42.811 [2024-12-12 10:40:16.642518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.811 [2024-12-12 10:40:16.642528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.811 [2024-12-12 10:40:16.642534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.811 [2024-12-12 10:40:16.642540] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.811 [2024-12-12 10:40:16.654611] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.811 [2024-12-12 10:40:16.655041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.811 [2024-12-12 10:40:16.655086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:42.811 [2024-12-12 10:40:16.655111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:42.811 [2024-12-12 10:40:16.655655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:42.811 [2024-12-12 10:40:16.655825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.811 [2024-12-12 10:40:16.655835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.811 [2024-12-12 10:40:16.655843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.811 [2024-12-12 10:40:16.655849] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:42.811 [2024-12-12 10:40:16.667490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.811 [2024-12-12 10:40:16.667815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.811 [2024-12-12 10:40:16.667834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:42.811 [2024-12-12 10:40:16.667842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:42.811 [2024-12-12 10:40:16.668001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:42.811 [2024-12-12 10:40:16.668161] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.811 [2024-12-12 10:40:16.668170] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.811 [2024-12-12 10:40:16.668177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.811 [2024-12-12 10:40:16.668183] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.811 [2024-12-12 10:40:16.680306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.811 [2024-12-12 10:40:16.680649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.811 [2024-12-12 10:40:16.680667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:42.811 [2024-12-12 10:40:16.680676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:42.811 [2024-12-12 10:40:16.680849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:42.811 [2024-12-12 10:40:16.681008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.811 [2024-12-12 10:40:16.681017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.811 [2024-12-12 10:40:16.681024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.812 [2024-12-12 10:40:16.681030] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:42.812 [2024-12-12 10:40:16.693219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.812 [2024-12-12 10:40:16.693563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.812 [2024-12-12 10:40:16.693624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:42.812 [2024-12-12 10:40:16.693650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:42.812 [2024-12-12 10:40:16.694234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:42.812 [2024-12-12 10:40:16.694701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.812 [2024-12-12 10:40:16.694711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.812 [2024-12-12 10:40:16.694718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.812 [2024-12-12 10:40:16.694724] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.812 [2024-12-12 10:40:16.706058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.812 [2024-12-12 10:40:16.706408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.812 [2024-12-12 10:40:16.706425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:42.812 [2024-12-12 10:40:16.706432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:42.812 [2024-12-12 10:40:16.706613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:42.812 [2024-12-12 10:40:16.706781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.812 [2024-12-12 10:40:16.706792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.812 [2024-12-12 10:40:16.706798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.812 [2024-12-12 10:40:16.706805] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:42.812 [2024-12-12 10:40:16.718881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.812 [2024-12-12 10:40:16.719300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.812 [2024-12-12 10:40:16.719340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:42.812 [2024-12-12 10:40:16.719375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:42.812 [2024-12-12 10:40:16.719974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:42.812 [2024-12-12 10:40:16.720564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.812 [2024-12-12 10:40:16.720600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.812 [2024-12-12 10:40:16.720623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.812 [2024-12-12 10:40:16.720643] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.812 [2024-12-12 10:40:16.731733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.812 [2024-12-12 10:40:16.732167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.812 [2024-12-12 10:40:16.732212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:42.812 [2024-12-12 10:40:16.732237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:42.812 [2024-12-12 10:40:16.732759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:42.812 [2024-12-12 10:40:16.732930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.812 [2024-12-12 10:40:16.732940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.812 [2024-12-12 10:40:16.732946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.812 [2024-12-12 10:40:16.732953] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:42.812 [2024-12-12 10:40:16.744644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.812 [2024-12-12 10:40:16.745053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.812 [2024-12-12 10:40:16.745070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:42.812 [2024-12-12 10:40:16.745078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:42.812 [2024-12-12 10:40:16.745237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:42.812 [2024-12-12 10:40:16.745398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.812 [2024-12-12 10:40:16.745407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.812 [2024-12-12 10:40:16.745413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.812 [2024-12-12 10:40:16.745419] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.812 [2024-12-12 10:40:16.757481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.812 [2024-12-12 10:40:16.757818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.812 [2024-12-12 10:40:16.757836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:42.812 [2024-12-12 10:40:16.757844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:42.812 [2024-12-12 10:40:16.758004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:42.812 [2024-12-12 10:40:16.758168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.812 [2024-12-12 10:40:16.758178] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.812 [2024-12-12 10:40:16.758184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.812 [2024-12-12 10:40:16.758191] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:42.812 [2024-12-12 10:40:16.770312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.812 [2024-12-12 10:40:16.770670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.812 [2024-12-12 10:40:16.770688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:42.812 [2024-12-12 10:40:16.770695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:42.812 [2024-12-12 10:40:16.770854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:42.812 [2024-12-12 10:40:16.771014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.812 [2024-12-12 10:40:16.771024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.812 [2024-12-12 10:40:16.771030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.812 [2024-12-12 10:40:16.771036] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
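[Editor's note] The NOTICE/ERROR cycles repeat on a roughly 12-13 ms cadence (16.342, 16.355, 16.368, ...), which appears to be the host-side reconnect machinery re-arming after each refused attempt. The function names printed here (nvme_ctrlr_disconnect, spdk_nvme_ctrlr_reconnect_poll_async) match SPDK's public async reconnect API; below is a minimal sketch of that pattern. It assumes an initialized SPDK environment and an existing controller handle, and it is not the actual bdev_nvme reset path, which drives the same calls from a poller rather than a blocking loop.

/* Sketch of the async reconnect pattern suggested by the log entries.
 * Assumes SPDK headers/libs are available and ctrlr is a live handle. */
#include <errno.h>
#include "spdk/nvme.h"

static int
try_reconnect(struct spdk_nvme_ctrlr *ctrlr)
{
    int rc;

    /* Corresponds to the "resetting controller" NOTICE lines. */
    rc = spdk_nvme_ctrlr_disconnect(ctrlr);
    if (rc != 0) {
        return rc;
    }

    spdk_nvme_ctrlr_reconnect_async(ctrlr);

    /* Poll the reinitialization: 0 means reconnected, -EAGAIN means keep
     * polling, anything else means the reconnect failed.  While the target
     * port is down, the internal connect() keeps hitting ECONNREFUSED and
     * polling ends in the "controller reinitialization failed" state.
     * Real code polls from an event loop instead of spinning like this. */
    do {
        rc = spdk_nvme_ctrlr_reconnect_poll_async(ctrlr);
    } while (rc == -EAGAIN);

    return rc;
}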
00:26:42.812 [2024-12-12 10:40:16.783325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.812 [2024-12-12 10:40:16.783757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.812 [2024-12-12 10:40:16.783775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:42.812 [2024-12-12 10:40:16.783783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:42.812 [2024-12-12 10:40:16.783958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:42.812 [2024-12-12 10:40:16.784131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.812 [2024-12-12 10:40:16.784141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.812 [2024-12-12 10:40:16.784148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.812 [2024-12-12 10:40:16.784154] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:42.812 [2024-12-12 10:40:16.796248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.812 [2024-12-12 10:40:16.796683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.812 [2024-12-12 10:40:16.796728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:42.812 [2024-12-12 10:40:16.796753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:42.813 [2024-12-12 10:40:16.797336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:42.813 [2024-12-12 10:40:16.797951] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.813 [2024-12-12 10:40:16.797961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.813 [2024-12-12 10:40:16.797971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.813 [2024-12-12 10:40:16.797978] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.813 [2024-12-12 10:40:16.809186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.813 [2024-12-12 10:40:16.809544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.813 [2024-12-12 10:40:16.809561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:42.813 [2024-12-12 10:40:16.809574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:42.813 [2024-12-12 10:40:16.809742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:42.813 [2024-12-12 10:40:16.809911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.813 [2024-12-12 10:40:16.809920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.813 [2024-12-12 10:40:16.809927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.813 [2024-12-12 10:40:16.809933] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:42.813 [2024-12-12 10:40:16.822014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.813 [2024-12-12 10:40:16.822404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.813 [2024-12-12 10:40:16.822421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:42.813 [2024-12-12 10:40:16.822429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:42.813 [2024-12-12 10:40:16.822594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:42.813 [2024-12-12 10:40:16.822778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.813 [2024-12-12 10:40:16.822787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.813 [2024-12-12 10:40:16.822794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.813 [2024-12-12 10:40:16.822800] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.073 [2024-12-12 10:40:16.835062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.073 [2024-12-12 10:40:16.835398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.073 [2024-12-12 10:40:16.835415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:43.073 [2024-12-12 10:40:16.835423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:43.074 [2024-12-12 10:40:16.835587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:43.074 [2024-12-12 10:40:16.835772] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.074 [2024-12-12 10:40:16.835782] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.074 [2024-12-12 10:40:16.835789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.074 [2024-12-12 10:40:16.835795] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:43.074 [2024-12-12 10:40:16.847799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.074 [2024-12-12 10:40:16.848223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.074 [2024-12-12 10:40:16.848266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:43.074 [2024-12-12 10:40:16.848290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:43.074 [2024-12-12 10:40:16.848755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:43.074 [2024-12-12 10:40:16.848916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.074 [2024-12-12 10:40:16.848926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.074 [2024-12-12 10:40:16.848932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.074 [2024-12-12 10:40:16.848938] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.074 [2024-12-12 10:40:16.860574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.074 [2024-12-12 10:40:16.860988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.074 [2024-12-12 10:40:16.861034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:43.074 [2024-12-12 10:40:16.861059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:43.074 [2024-12-12 10:40:16.861654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:43.074 [2024-12-12 10:40:16.862228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.074 [2024-12-12 10:40:16.862237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.074 [2024-12-12 10:40:16.862243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.074 [2024-12-12 10:40:16.862249] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:43.074 [2024-12-12 10:40:16.873335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.074 [2024-12-12 10:40:16.873726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.074 [2024-12-12 10:40:16.873744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:43.074 [2024-12-12 10:40:16.873752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:43.074 [2024-12-12 10:40:16.873911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:43.074 [2024-12-12 10:40:16.874071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.074 [2024-12-12 10:40:16.874080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.074 [2024-12-12 10:40:16.874086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.074 [2024-12-12 10:40:16.874092] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.074 [2024-12-12 10:40:16.886071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.074 [2024-12-12 10:40:16.886463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.074 [2024-12-12 10:40:16.886481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:43.074 [2024-12-12 10:40:16.886491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:43.074 [2024-12-12 10:40:16.886675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:43.074 [2024-12-12 10:40:16.886844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.074 [2024-12-12 10:40:16.886852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.074 [2024-12-12 10:40:16.886859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.074 [2024-12-12 10:40:16.886865] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:43.074 [2024-12-12 10:40:16.898947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.074 [2024-12-12 10:40:16.899350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.074 [2024-12-12 10:40:16.899368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:43.074 [2024-12-12 10:40:16.899376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:43.074 [2024-12-12 10:40:16.899545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:43.074 [2024-12-12 10:40:16.899720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.074 [2024-12-12 10:40:16.899731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.074 [2024-12-12 10:40:16.899738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.074 [2024-12-12 10:40:16.899745] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.074 5815.60 IOPS, 22.72 MiB/s [2024-12-12T09:40:17.097Z] [2024-12-12 10:40:16.911811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.074 [2024-12-12 10:40:16.912235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.074 [2024-12-12 10:40:16.912253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:43.074 [2024-12-12 10:40:16.912261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:43.074 [2024-12-12 10:40:16.912430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:43.074 [2024-12-12 10:40:16.912608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.074 [2024-12-12 10:40:16.912618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.074 [2024-12-12 10:40:16.912626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.074 [2024-12-12 10:40:16.912633] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:43.074 [2024-12-12 10:40:16.924623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.074 [2024-12-12 10:40:16.925039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.074 [2024-12-12 10:40:16.925057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:43.074 [2024-12-12 10:40:16.925065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:43.074 [2024-12-12 10:40:16.925226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:43.074 [2024-12-12 10:40:16.925390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.074 [2024-12-12 10:40:16.925399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.074 [2024-12-12 10:40:16.925406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.074 [2024-12-12 10:40:16.925412] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
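[Editor's note] The interleaved throughput sample above (5815.60 IOPS, 22.72 MiB/s) is self-consistent if the workload issues 4 KiB I/Os: 5815.60 x 4096 B ≈ 23.82 MB/s ≈ 22.72 MiB/s. The block size is not printed in this excerpt, so 4 KiB is an inference from the arithmetic, but the two reported figures agree under it.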
00:26:43.601 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1668078 Killed "${NVMF_APP[@]}" "$@" 00:26:43.601 10:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:26:43.601 10:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:43.601 [2024-12-12 10:40:17.455690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.601 10:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:43.601 [2024-12-12 10:40:17.456119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.601 [2024-12-12 10:40:17.456137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420 00:26:43.601 [2024-12-12 10:40:17.456145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set 00:26:43.601 10:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:43.601 [2024-12-12 10:40:17.456313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor 00:26:43.601 [2024-12-12 10:40:17.456482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.601 [2024-12-12 10:40:17.456492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.601 [2024-12-12 10:40:17.456498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.601 [2024-12-12 10:40:17.456505] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:43.601 10:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:43.601 10:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1669232 00:26:43.601 10:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1669232 00:26:43.601 10:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:43.601 10:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1669232 ']' 00:26:43.601 10:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:43.601 10:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:43.601 10:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:43.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:43.601 10:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:43.601 10:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:43.601 [2024-12-12 10:40:17.468696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.601 [2024-12-12 10:40:17.469053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.601 [2024-12-12 10:40:17.469070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:43.601 [2024-12-12 10:40:17.469078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:43.601 [2024-12-12 10:40:17.469252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:43.601 [2024-12-12 10:40:17.469425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.601 [2024-12-12 10:40:17.469436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.601 [2024-12-12 10:40:17.469444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.601 [2024-12-12 10:40:17.469453] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.601 [2024-12-12 10:40:17.481820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.601 [2024-12-12 10:40:17.482243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.601 [2024-12-12 10:40:17.482262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:43.601 [2024-12-12 10:40:17.482270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:43.601 [2024-12-12 10:40:17.482444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:43.601 [2024-12-12 10:40:17.482625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.601 [2024-12-12 10:40:17.482636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.601 [2024-12-12 10:40:17.482643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.601 [2024-12-12 10:40:17.482650] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.601 [2024-12-12 10:40:17.494836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.601 [2024-12-12 10:40:17.495194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.602 [2024-12-12 10:40:17.495211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:43.602 [2024-12-12 10:40:17.495219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:43.602 [2024-12-12 10:40:17.495388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:43.602 [2024-12-12 10:40:17.495557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.602 [2024-12-12 10:40:17.495567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.602 [2024-12-12 10:40:17.495583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.602 [2024-12-12 10:40:17.495590] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.602 [2024-12-12 10:40:17.507798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.602 [2024-12-12 10:40:17.508224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.602 [2024-12-12 10:40:17.508242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:43.602 [2024-12-12 10:40:17.508250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:43.602 [2024-12-12 10:40:17.508424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:43.602 [2024-12-12 10:40:17.508605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.602 [2024-12-12 10:40:17.508616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.602 [2024-12-12 10:40:17.508623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.602 [2024-12-12 10:40:17.508629] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.602 [2024-12-12 10:40:17.511148] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization...
00:26:43.602 [2024-12-12 10:40:17.511189] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:43.602 [2024-12-12 10:40:17.520985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.602 [2024-12-12 10:40:17.521422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.602 [2024-12-12 10:40:17.521439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:43.602 [2024-12-12 10:40:17.521447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:43.602 [2024-12-12 10:40:17.521625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:43.602 [2024-12-12 10:40:17.521797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.602 [2024-12-12 10:40:17.521807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.602 [2024-12-12 10:40:17.521814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.602 [2024-12-12 10:40:17.521821] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.602 [2024-12-12 10:40:17.533981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.602 [2024-12-12 10:40:17.534407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.602 [2024-12-12 10:40:17.534425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:43.602 [2024-12-12 10:40:17.534433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:43.602 [2024-12-12 10:40:17.534608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:43.602 [2024-12-12 10:40:17.534777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.602 [2024-12-12 10:40:17.534788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.602 [2024-12-12 10:40:17.534800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.602 [2024-12-12 10:40:17.534808] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.602 [2024-12-12 10:40:17.547008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.602 [2024-12-12 10:40:17.547412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.602 [2024-12-12 10:40:17.547430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:43.602 [2024-12-12 10:40:17.547438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:43.602 [2024-12-12 10:40:17.547612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:43.602 [2024-12-12 10:40:17.547781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.602 [2024-12-12 10:40:17.547790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.602 [2024-12-12 10:40:17.547797] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.602 [2024-12-12 10:40:17.547804] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.602 [2024-12-12 10:40:17.559932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.602 [2024-12-12 10:40:17.560353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.602 [2024-12-12 10:40:17.560372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:43.602 [2024-12-12 10:40:17.560380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:43.602 [2024-12-12 10:40:17.560554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:43.602 [2024-12-12 10:40:17.560735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.602 [2024-12-12 10:40:17.560746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.602 [2024-12-12 10:40:17.560753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.602 [2024-12-12 10:40:17.560760] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.602 [2024-12-12 10:40:17.572931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.602 [2024-12-12 10:40:17.573334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.602 [2024-12-12 10:40:17.573352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:43.602 [2024-12-12 10:40:17.573360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:43.602 [2024-12-12 10:40:17.573529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:43.602 [2024-12-12 10:40:17.573724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.602 [2024-12-12 10:40:17.573734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.602 [2024-12-12 10:40:17.573741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.602 [2024-12-12 10:40:17.573747] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.602 [2024-12-12 10:40:17.585965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.602 [2024-12-12 10:40:17.586328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.602 [2024-12-12 10:40:17.586347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:43.602 [2024-12-12 10:40:17.586356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:43.602 [2024-12-12 10:40:17.586528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:43.602 [2024-12-12 10:40:17.586724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.602 [2024-12-12 10:40:17.586735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.602 [2024-12-12 10:40:17.586742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.602 [2024-12-12 10:40:17.586749] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.602 [2024-12-12 10:40:17.590078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:26:43.602 [2024-12-12 10:40:17.599087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.602 [2024-12-12 10:40:17.599530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.602 [2024-12-12 10:40:17.599548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:43.602 [2024-12-12 10:40:17.599557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:43.602 [2024-12-12 10:40:17.599738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:43.602 [2024-12-12 10:40:17.599913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.602 [2024-12-12 10:40:17.599924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.602 [2024-12-12 10:40:17.599932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.602 [2024-12-12 10:40:17.599939] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.602 [2024-12-12 10:40:17.612036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.602 [2024-12-12 10:40:17.612462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.603 [2024-12-12 10:40:17.612481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:43.603 [2024-12-12 10:40:17.612490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:43.603 [2024-12-12 10:40:17.612665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:43.603 [2024-12-12 10:40:17.612835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.603 [2024-12-12 10:40:17.612845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.603 [2024-12-12 10:40:17.612852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.603 [2024-12-12 10:40:17.612858] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.863 [2024-12-12 10:40:17.625117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.863 [2024-12-12 10:40:17.625470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.863 [2024-12-12 10:40:17.625492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:43.863 [2024-12-12 10:40:17.625499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:43.864 [2024-12-12 10:40:17.625694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:43.864 [2024-12-12 10:40:17.625880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.864 [2024-12-12 10:40:17.625890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.864 [2024-12-12 10:40:17.625898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.864 [2024-12-12 10:40:17.625907] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.864 [2024-12-12 10:40:17.631622] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:43.864 [2024-12-12 10:40:17.631647] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:43.864 [2024-12-12 10:40:17.631656] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:43.864 [2024-12-12 10:40:17.631663] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:43.864 [2024-12-12 10:40:17.631668] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:43.864 [2024-12-12 10:40:17.632955] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:26:43.864 [2024-12-12 10:40:17.632987] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:26:43.864 [2024-12-12 10:40:17.632989] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:26:43.864 [2024-12-12 10:40:17.638214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.864 [2024-12-12 10:40:17.638567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.864 [2024-12-12 10:40:17.638594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:43.864 [2024-12-12 10:40:17.638603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:43.864 [2024-12-12 10:40:17.638778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:43.864 [2024-12-12 10:40:17.638954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.864 [2024-12-12 10:40:17.638964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.864 [2024-12-12 10:40:17.638972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.864 [2024-12-12 10:40:17.638980] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.864 [2024-12-12 10:40:17.651338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.864 [2024-12-12 10:40:17.651670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.864 [2024-12-12 10:40:17.651692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:43.864 [2024-12-12 10:40:17.651701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:43.864 [2024-12-12 10:40:17.651877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:43.864 [2024-12-12 10:40:17.652052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.864 [2024-12-12 10:40:17.652062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.864 [2024-12-12 10:40:17.652078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.864 [2024-12-12 10:40:17.652087] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.864 [2024-12-12 10:40:17.664456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.864 [2024-12-12 10:40:17.664895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.864 [2024-12-12 10:40:17.664918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:43.864 [2024-12-12 10:40:17.664927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:43.864 [2024-12-12 10:40:17.665102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:43.864 [2024-12-12 10:40:17.665278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.864 [2024-12-12 10:40:17.665288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.864 [2024-12-12 10:40:17.665297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.864 [2024-12-12 10:40:17.665305] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.864 [2024-12-12 10:40:17.677469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.864 [2024-12-12 10:40:17.677904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.864 [2024-12-12 10:40:17.677926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:43.864 [2024-12-12 10:40:17.677935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:43.864 [2024-12-12 10:40:17.678111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:43.864 [2024-12-12 10:40:17.678287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.864 [2024-12-12 10:40:17.678299] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.864 [2024-12-12 10:40:17.678307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.864 [2024-12-12 10:40:17.678315] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.864 [2024-12-12 10:40:17.690531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.864 [2024-12-12 10:40:17.690946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.864 [2024-12-12 10:40:17.690967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:43.864 [2024-12-12 10:40:17.690977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:43.864 [2024-12-12 10:40:17.691152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:43.864 [2024-12-12 10:40:17.691327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.864 [2024-12-12 10:40:17.691338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.864 [2024-12-12 10:40:17.691346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.864 [2024-12-12 10:40:17.691353] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.864 [2024-12-12 10:40:17.703577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.864 [2024-12-12 10:40:17.703997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.864 [2024-12-12 10:40:17.704016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:43.864 [2024-12-12 10:40:17.704024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:43.864 [2024-12-12 10:40:17.704199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:43.864 [2024-12-12 10:40:17.704375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.864 [2024-12-12 10:40:17.704385] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.864 [2024-12-12 10:40:17.704392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.864 [2024-12-12 10:40:17.704399] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.864 [2024-12-12 10:40:17.716587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.864 [2024-12-12 10:40:17.716958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.864 [2024-12-12 10:40:17.716977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:43.864 [2024-12-12 10:40:17.716985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:43.864 [2024-12-12 10:40:17.717160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:43.864 [2024-12-12 10:40:17.717336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.864 [2024-12-12 10:40:17.717348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.864 [2024-12-12 10:40:17.717356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.864 [2024-12-12 10:40:17.717366] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.864 [2024-12-12 10:40:17.729546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.864 [2024-12-12 10:40:17.729887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.864 [2024-12-12 10:40:17.729906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:43.864 [2024-12-12 10:40:17.729914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:43.864 [2024-12-12 10:40:17.730088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:43.864 [2024-12-12 10:40:17.730262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.864 [2024-12-12 10:40:17.730273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.864 [2024-12-12 10:40:17.730280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.865 [2024-12-12 10:40:17.730287] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.865 10:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:43.865 10:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:26:43.865 10:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:26:43.865 10:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
00:26:43.865 10:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:43.865 [2024-12-12 10:40:17.742639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.865 [2024-12-12 10:40:17.743025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.865 [2024-12-12 10:40:17.743043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:43.865 [2024-12-12 10:40:17.743051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:43.865 [2024-12-12 10:40:17.743224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:43.865 [2024-12-12 10:40:17.743399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.865 [2024-12-12 10:40:17.743409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.865 [2024-12-12 10:40:17.743416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.865 [2024-12-12 10:40:17.743422] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.865 [2024-12-12 10:40:17.755617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.865 [2024-12-12 10:40:17.755956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.865 [2024-12-12 10:40:17.755974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:43.865 [2024-12-12 10:40:17.755982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:43.865 [2024-12-12 10:40:17.756156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:43.865 [2024-12-12 10:40:17.756331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.865 [2024-12-12 10:40:17.756341] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.865 [2024-12-12 10:40:17.756349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.865 [2024-12-12 10:40:17.756355] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.865 [2024-12-12 10:40:17.768713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.865 [2024-12-12 10:40:17.769008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.865 [2024-12-12 10:40:17.769026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:43.865 [2024-12-12 10:40:17.769034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:43.865 [2024-12-12 10:40:17.769208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:43.865 [2024-12-12 10:40:17.769381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.865 [2024-12-12 10:40:17.769392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.865 [2024-12-12 10:40:17.769403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.865 [2024-12-12 10:40:17.769414] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.865 10:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:43.865 10:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:26:43.865 10:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:43.865 10:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:43.865 [2024-12-12 10:40:17.777722] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:43.865 [2024-12-12 10:40:17.781778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.865 [2024-12-12 10:40:17.782061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.865 [2024-12-12 10:40:17.782079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:43.865 [2024-12-12 10:40:17.782088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:43.865 [2024-12-12 10:40:17.782261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:43.865 [2024-12-12 10:40:17.782437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.865 [2024-12-12 10:40:17.782449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.865 [2024-12-12 10:40:17.782457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.865 [2024-12-12 10:40:17.782464] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.865 10:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:43.865 10:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:26:43.865 10:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:43.865 10:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:43.865 [2024-12-12 10:40:17.794829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.865 [2024-12-12 10:40:17.795169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.865 [2024-12-12 10:40:17.795187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:43.865 [2024-12-12 10:40:17.795195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:43.865 [2024-12-12 10:40:17.795370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:43.865 [2024-12-12 10:40:17.795545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.865 [2024-12-12 10:40:17.795555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.865 [2024-12-12 10:40:17.795562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.865 [2024-12-12 10:40:17.795573] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.865 [2024-12-12 10:40:17.807943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.865 [2024-12-12 10:40:17.808377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.865 [2024-12-12 10:40:17.808395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:43.865 [2024-12-12 10:40:17.808404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:43.865 [2024-12-12 10:40:17.808595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:43.865 [2024-12-12 10:40:17.808770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.865 [2024-12-12 10:40:17.808785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.865 [2024-12-12 10:40:17.808792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.865 [2024-12-12 10:40:17.808799] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.865 Malloc0
00:26:43.865 10:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:43.865 10:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:26:43.865 10:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:43.865 [2024-12-12 10:40:17.821000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.865 10:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:43.865 [2024-12-12 10:40:17.821336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.865 [2024-12-12 10:40:17.821355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:43.865 [2024-12-12 10:40:17.821363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:43.865 [2024-12-12 10:40:17.821536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:43.865 [2024-12-12 10:40:17.821714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.865 [2024-12-12 10:40:17.821724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.865 [2024-12-12 10:40:17.821731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.865 [2024-12-12 10:40:17.821738] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.865 10:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:43.865 10:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:26:43.865 10:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:43.865 10:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:43.865 [2024-12-12 10:40:17.834087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.866 [2024-12-12 10:40:17.834519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.866 [2024-12-12 10:40:17.834537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bb7e0 with addr=10.0.0.2, port=4420
00:26:43.866 [2024-12-12 10:40:17.834545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bb7e0 is same with the state(6) to be set
00:26:43.866 [2024-12-12 10:40:17.834724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bb7e0 (9): Bad file descriptor
00:26:43.866 [2024-12-12 10:40:17.834898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:43.866 [2024-12-12 10:40:17.834909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:43.866 [2024-12-12 10:40:17.834916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:43.866 [2024-12-12 10:40:17.834922] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:43.866 10:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:43.866 10:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:43.866 10:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:43.866 10:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:43.866 [2024-12-12 10:40:17.843282] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:43.866 [2024-12-12 10:40:17.847116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:43.866 10:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:43.866 10:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1668326
00:26:43.866 [2024-12-12 10:40:17.870143] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful.
00:26:45.061 4895.33 IOPS, 19.12 MiB/s [2024-12-12T09:40:20.022Z] 5826.86 IOPS, 22.76 MiB/s [2024-12-12T09:40:20.960Z] 6520.38 IOPS, 25.47 MiB/s [2024-12-12T09:40:22.338Z] 7045.22 IOPS, 27.52 MiB/s [2024-12-12T09:40:23.275Z] 7474.90 IOPS, 29.20 MiB/s [2024-12-12T09:40:24.213Z] 7827.09 IOPS, 30.57 MiB/s [2024-12-12T09:40:25.150Z] 8139.17 IOPS, 31.79 MiB/s [2024-12-12T09:40:26.087Z] 8370.08 IOPS, 32.70 MiB/s [2024-12-12T09:40:27.024Z] 8595.21 IOPS, 33.58 MiB/s [2024-12-12T09:40:27.024Z] 8785.47 IOPS, 34.32 MiB/s
00:26:53.001 Latency(us)
00:26:53.001 [2024-12-12T09:40:27.024Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:53.001 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:53.001 Verification LBA range: start 0x0 length 0x4000
00:26:53.001 Nvme1n1 : 15.01 8788.07 34.33 11020.29 0.00 6442.12 596.85 18599.74
00:26:53.001 [2024-12-12T09:40:27.024Z] ===================================================================================================================
00:26:53.001 [2024-12-12T09:40:27.024Z] Total : 8788.07 34.33 11020.29 0.00 6442.12 596.85 18599.74
00:26:53.261 10:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:26:53.261 10:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:53.261 10:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:53.261 10:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:53.261 10:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:53.261 10:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:26:53.261 10:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:26:53.261 10:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:53.261 10:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
00:26:53.261 10:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:53.261 10:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
00:26:53.261 10:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:53.261 10:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:26:53.261 10:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:53.261 10:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e
00:26:53.261 10:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0
00:26:53.261 10:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 1669232 ']'
00:26:53.261 10:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 1669232
00:26:53.261 10:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 1669232 ']'
00:26:53.261 10:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 1669232
00:26:53.261 10:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname
00:26:53.261 10:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:53.261 10:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1669232
00:26:53.261 10:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:26:53.261 10:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:26:53.261 10:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1669232'
killing process with pid 1669232
00:26:53.261 10:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 1669232
00:26:53.261 10:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 1669232
00:26:53.521 10:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:26:53.521 10:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:26:53.521 10:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:26:53.521 10:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr
00:26:53.521 10:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save
00:26:53.521 10:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:26:53.521 10:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore
00:26:53.521 10:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:53.521 10:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:26:53.521 10:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:53.521 10:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:53.521 10:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:26:56.058
00:26:56.058 real 0m26.003s
00:26:56.058 user 1m0.777s
00:26:56.058 sys 0m6.713s
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:56.058 ************************************
00:26:56.058 END TEST nvmf_bdevperf
00:26:56.058 ************************************
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:56.058 ************************************
00:26:56.058 START TEST nvmf_target_disconnect
00:26:56.058 ************************************
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:26:56.058 * Looking for test storage...
00:26:56.058 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-:
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-:
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<'
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 ))
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:26:56.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:56.058 --rc genhtml_branch_coverage=1
00:26:56.058 --rc genhtml_function_coverage=1
00:26:56.058 --rc genhtml_legend=1
00:26:56.058 --rc geninfo_all_blocks=1
00:26:56.058 --rc geninfo_unexecuted_blocks=1
00:26:56.058
00:26:56.058 '
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:26:56.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:56.058 --rc genhtml_branch_coverage=1
00:26:56.058 --rc genhtml_function_coverage=1
00:26:56.058 --rc genhtml_legend=1
00:26:56.058 --rc geninfo_all_blocks=1
00:26:56.058 --rc geninfo_unexecuted_blocks=1
00:26:56.058
00:26:56.058 '
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:26:56.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:56.058 --rc genhtml_branch_coverage=1
00:26:56.058 --rc genhtml_function_coverage=1
00:26:56.058 --rc genhtml_legend=1
00:26:56.058 --rc geninfo_all_blocks=1
00:26:56.058 --rc geninfo_unexecuted_blocks=1
00:26:56.058
00:26:56.058 '
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:26:56.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:56.058 --rc genhtml_branch_coverage=1
00:26:56.058 --rc genhtml_function_coverage=1
00:26:56.058 --rc genhtml_legend=1
00:26:56.058 --rc geninfo_all_blocks=1
00:26:56.058 --rc geninfo_unexecuted_blocks=1
00:26:56.058
00:26:56.058 '
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:56.058 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:26:56.059 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:26:56.059 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:56.059 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:56.059 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:26:56.059 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:26:56.059 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:26:56.059 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob
00:26:56.059 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:56.059 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:56.059 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:56.059 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:56.059 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:56.059 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:56.059 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH
00:26:56.059 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:56.059 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0
00:26:56.059 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:26:56.059 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:26:56.059 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:26:56.059 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:56.059 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:56.059 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:26:56.059 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:26:56.059 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:26:56.059 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:26:56.059 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0
00:26:56.059 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme
00:26:56.059 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect --
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:56.059 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:26:56.059 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:26:56.059 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:56.059 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:56.059 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:56.059 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:56.059 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:56.059 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:56.059 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:56.059 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.059 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:56.059 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:56.059 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:26:56.059 10:40:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:01.334 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:01.334 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:27:01.334 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:01.334 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:01.334 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:01.334 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:01.334 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:01.334 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:27:01.334 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:01.334 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:27:01.334 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:27:01.334 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:27:01.334 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:27:01.334 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:27:01.334 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:01.335 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:01.335 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:01.335 Found net devices under 0000:af:00.0: cvl_0_0 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:01.335 Found net devices under 0000:af:00.1: cvl_0_1 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
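Annotation: the trace above walks the harness's NIC-discovery loop — for each candidate PCI address of a supported Intel/Mellanox NVMe-oF NIC, it globs sysfs for the bound kernel netdev and collects the interface names (here the two E810 ports resolve to cvl_0_0 and cvl_0_1). A minimal sketch of that pattern, not the harness's exact code; the PCI addresses are taken from the log and assumed present:

    # Sketch of the sysfs discovery loop traced above.
    pci_devs=("0000:af:00.0" "0000:af:00.1")   # addresses as reported in the log
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        # Glob the netdev directory that the kernel creates for this device.
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        # Strip the leading path, keeping only the interface name(s).
        pci_net_devs=("${pci_net_devs[@]##*/}")
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done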
00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:01.335 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:01.593 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:01.593 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:01.593 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:01.593 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:01.593 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:01.593 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:01.593 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:01.593 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:01.593 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:01.593 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:27:01.593 00:27:01.593 --- 10.0.0.2 ping statistics --- 00:27:01.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:01.593 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:27:01.593 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:01.593 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:01.593 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:27:01.593 00:27:01.593 --- 10.0.0.1 ping statistics --- 00:27:01.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:01.593 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:27:01.593 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:01.593 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:27:01.594 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:01.594 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:01.594 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:01.594 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:01.594 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:01.594 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:01.594 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:01.594 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:01.594 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:01.594 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:01.594 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:01.853 ************************************ 00:27:01.853 START TEST nvmf_target_disconnect_tc1 00:27:01.853 ************************************ 00:27:01.853 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:27:01.853 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:01.853 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:27:01.853 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:01.853 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:01.853 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:01.853 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:01.853 10:40:35 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:01.853 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:01.853 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:01.853 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:01.853 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:01.853 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:01.853 [2024-12-12 10:40:35.759718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.853 [2024-12-12 10:40:35.759760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18680b0 with addr=10.0.0.2, port=4420 00:27:01.853 [2024-12-12 10:40:35.759777] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:01.853 [2024-12-12 10:40:35.759790] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:01.853 [2024-12-12 10:40:35.759796] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:27:01.853 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:01.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:01.853 Initializing NVMe Controllers 00:27:01.853 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:27:01.853 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:01.853 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:01.853 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:01.853 00:27:01.853 real 0m0.118s 00:27:01.853 user 0m0.056s 00:27:01.853 sys 0m0.063s 00:27:01.853 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:01.853 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:01.853 ************************************ 00:27:01.853 END TEST nvmf_target_disconnect_tc1 00:27:01.853 ************************************ 00:27:01.853 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:01.853 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:01.853 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:27:01.853 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:01.853 ************************************ 00:27:01.853 START TEST nvmf_target_disconnect_tc2 00:27:01.853 ************************************ 00:27:01.853 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:27:01.853 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:01.853 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:01.853 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:01.853 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:01.853 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:01.853 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1674303 00:27:01.853 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1674303 00:27:01.853 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:01.853 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1674303 ']' 00:27:01.853 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:01.853 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:01.853 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:01.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:01.853 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:01.853 10:40:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:02.112 [2024-12-12 10:40:35.900638] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:27:02.112 [2024-12-12 10:40:35.900678] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:02.112 [2024-12-12 10:40:35.976565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:02.112 [2024-12-12 10:40:36.018114] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:02.112 [2024-12-12 10:40:36.018150] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
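Annotation: nvmfappstart in the trace above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then blocks in waitforlisten until the app's RPC socket answers. A condensed sketch of that start-and-wait pattern, assuming relative build paths and a 0.5 s poll interval (both illustrative, not taken from the script):

    # Start the target in the test namespace, then wait for its RPC socket.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    # Poll until the app responds; bail out if the process dies first.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died" >&2; exit 1; }
        sleep 0.5
    done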
00:27:02.112 [2024-12-12 10:40:36.018157] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:02.112 [2024-12-12 10:40:36.018162] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:02.112 [2024-12-12 10:40:36.018167] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:02.112 [2024-12-12 10:40:36.019697] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:27:02.112 [2024-12-12 10:40:36.019823] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:27:02.112 [2024-12-12 10:40:36.019929] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:27:02.112 [2024-12-12 10:40:36.019931] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:27:02.112 10:40:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:02.112 10:40:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:02.112 10:40:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:02.112 10:40:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:02.112 10:40:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:02.370 10:40:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:02.370 10:40:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:02.370 10:40:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.370 10:40:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:02.370 Malloc0 00:27:02.370 10:40:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.370 10:40:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:02.370 10:40:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.370 10:40:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:02.370 [2024-12-12 10:40:36.187850] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:02.370 10:40:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.370 10:40:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:02.370 10:40:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.370 10:40:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:02.370 10:40:36 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.370 10:40:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:02.370 10:40:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.370 10:40:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:02.370 10:40:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.370 10:40:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:02.370 10:40:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.370 10:40:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:02.370 [2024-12-12 10:40:36.216851] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:02.370 10:40:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.370 10:40:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:02.370 10:40:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.370 10:40:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:02.370 10:40:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.370 10:40:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1674427 00:27:02.370 10:40:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:27:02.370 10:40:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:04.377 10:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1674303 00:27:04.377 10:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error 
(sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Write completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Write completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Write completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Write completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Write completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Write completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Write completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Write completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Write completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Write completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 [2024-12-12 10:40:38.249377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed 
with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Write completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Write completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Write completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Write completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Write completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.378 Write completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Read completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Write completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Read completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 [2024-12-12 10:40:38.249591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:04.378 Read completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Read completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Read completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Read completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Write completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Write completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Read completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Read completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Read completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Read completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Write completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Read completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Read completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Write completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Read completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Read completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Read completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Write completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Write completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Write completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 
Read completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Read completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Read completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Write completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Read completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Read completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Read completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Write completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Read completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Read completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Read completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Read completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 [2024-12-12 10:40:38.249810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.378 Read completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Read completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Read completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Read completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Read completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Read completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Read completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Write completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Write completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Write completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Read completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Write completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Write completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Read completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Read completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Read completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Write completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Read completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Write completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Write completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Write completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Write completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Read completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Read completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Read completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Write completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Write completed with error (sct=0, sc=8) 00:27:04.378 starting I/O 
failed 00:27:04.378 Write completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Write completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Read completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Write completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 Read completed with error (sct=0, sc=8) 00:27:04.378 starting I/O failed 00:27:04.378 [2024-12-12 10:40:38.250017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:04.378 [2024-12-12 10:40:38.250222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.378 [2024-12-12 10:40:38.250245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.378 qpair failed and we were unable to recover it. 00:27:04.378 [2024-12-12 10:40:38.250405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.378 [2024-12-12 10:40:38.250418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.378 qpair failed and we were unable to recover it. 00:27:04.378 [2024-12-12 10:40:38.250624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.378 [2024-12-12 10:40:38.250660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.378 qpair failed and we were unable to recover it. 00:27:04.378 [2024-12-12 10:40:38.250796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.378 [2024-12-12 10:40:38.250830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.378 qpair failed and we were unable to recover it. 00:27:04.378 [2024-12-12 10:40:38.251013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.378 [2024-12-12 10:40:38.251046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.378 qpair failed and we were unable to recover it. 00:27:04.378 [2024-12-12 10:40:38.251207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.378 [2024-12-12 10:40:38.251239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.378 qpair failed and we were unable to recover it. 00:27:04.378 [2024-12-12 10:40:38.251436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.378 [2024-12-12 10:40:38.251468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.378 qpair failed and we were unable to recover it. 00:27:04.378 [2024-12-12 10:40:38.251602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.378 [2024-12-12 10:40:38.251636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.378 qpair failed and we were unable to recover it. 
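Annotation: the repeated "connect() failed, errno = 111" records that follow are ECONNREFUSED — expected here, because the test killed the target with kill -9 (host/target_disconnect.sh@45 in the trace) and the reconnect tool keeps retrying the now-closed port. When reading such logs, an errno value can be decoded with a one-liner:

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # ECONNREFUSED - Connection refused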
00:27:04.378 [2024-12-12 10:40:38.251778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.378 [2024-12-12 10:40:38.251818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.378 qpair failed and we were unable to recover it. 00:27:04.378 [2024-12-12 10:40:38.251943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.378 [2024-12-12 10:40:38.251977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.378 qpair failed and we were unable to recover it. 00:27:04.378 [2024-12-12 10:40:38.252118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.378 [2024-12-12 10:40:38.252150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.378 qpair failed and we were unable to recover it. 00:27:04.378 [2024-12-12 10:40:38.252294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.378 [2024-12-12 10:40:38.252326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.378 qpair failed and we were unable to recover it. 00:27:04.378 [2024-12-12 10:40:38.252449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.378 [2024-12-12 10:40:38.252482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.378 qpair failed and we were unable to recover it. 00:27:04.378 [2024-12-12 10:40:38.252619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.378 [2024-12-12 10:40:38.252654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.378 qpair failed and we were unable to recover it. 00:27:04.378 [2024-12-12 10:40:38.252774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.379 [2024-12-12 10:40:38.252805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.379 qpair failed and we were unable to recover it. 00:27:04.379 [2024-12-12 10:40:38.252924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.379 [2024-12-12 10:40:38.252957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.379 qpair failed and we were unable to recover it. 00:27:04.379 [2024-12-12 10:40:38.253095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.379 [2024-12-12 10:40:38.253128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.379 qpair failed and we were unable to recover it. 00:27:04.379 [2024-12-12 10:40:38.253225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.379 [2024-12-12 10:40:38.253236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.379 qpair failed and we were unable to recover it. 
00:27:04.385 [2024-12-12 10:40:38.292468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.385 [2024-12-12 10:40:38.292501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.385 qpair failed and we were unable to recover it. 00:27:04.385 [2024-12-12 10:40:38.292765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.385 [2024-12-12 10:40:38.292799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.385 qpair failed and we were unable to recover it. 00:27:04.385 [2024-12-12 10:40:38.292978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.385 [2024-12-12 10:40:38.293010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.385 qpair failed and we were unable to recover it. 00:27:04.385 [2024-12-12 10:40:38.293202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.385 [2024-12-12 10:40:38.293235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.385 qpair failed and we were unable to recover it. 00:27:04.385 [2024-12-12 10:40:38.293356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.385 [2024-12-12 10:40:38.293389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.385 qpair failed and we were unable to recover it. 00:27:04.385 [2024-12-12 10:40:38.293502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.385 [2024-12-12 10:40:38.293535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.385 qpair failed and we were unable to recover it. 00:27:04.385 [2024-12-12 10:40:38.293741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.385 [2024-12-12 10:40:38.293775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.385 qpair failed and we were unable to recover it. 00:27:04.385 [2024-12-12 10:40:38.294043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.385 [2024-12-12 10:40:38.294076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.385 qpair failed and we were unable to recover it. 00:27:04.385 [2024-12-12 10:40:38.294199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.385 [2024-12-12 10:40:38.294231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.385 qpair failed and we were unable to recover it. 00:27:04.385 [2024-12-12 10:40:38.294347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.385 [2024-12-12 10:40:38.294380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.385 qpair failed and we were unable to recover it. 
00:27:04.385 [2024-12-12 10:40:38.294481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.385 [2024-12-12 10:40:38.294514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.385 qpair failed and we were unable to recover it. 00:27:04.385 [2024-12-12 10:40:38.294646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.385 [2024-12-12 10:40:38.294680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.385 qpair failed and we were unable to recover it. 00:27:04.385 [2024-12-12 10:40:38.294850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.385 [2024-12-12 10:40:38.294882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.385 qpair failed and we were unable to recover it. 00:27:04.385 [2024-12-12 10:40:38.295067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.385 [2024-12-12 10:40:38.295099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.385 qpair failed and we were unable to recover it. 00:27:04.385 [2024-12-12 10:40:38.295220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.385 [2024-12-12 10:40:38.295252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.385 qpair failed and we were unable to recover it. 00:27:04.385 [2024-12-12 10:40:38.295423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.385 [2024-12-12 10:40:38.295456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.385 qpair failed and we were unable to recover it. 00:27:04.385 [2024-12-12 10:40:38.295630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.385 [2024-12-12 10:40:38.295664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.385 qpair failed and we were unable to recover it. 00:27:04.385 [2024-12-12 10:40:38.295880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.385 [2024-12-12 10:40:38.295914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.385 qpair failed and we were unable to recover it. 00:27:04.386 [2024-12-12 10:40:38.296183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.386 [2024-12-12 10:40:38.296215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.386 qpair failed and we were unable to recover it. 00:27:04.386 [2024-12-12 10:40:38.296403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.386 [2024-12-12 10:40:38.296435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.386 qpair failed and we were unable to recover it. 
00:27:04.386 [2024-12-12 10:40:38.296560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.386 [2024-12-12 10:40:38.296601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.386 qpair failed and we were unable to recover it. 00:27:04.386 [2024-12-12 10:40:38.296774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.386 [2024-12-12 10:40:38.296806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.386 qpair failed and we were unable to recover it. 00:27:04.386 [2024-12-12 10:40:38.297045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.386 [2024-12-12 10:40:38.297077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.386 qpair failed and we were unable to recover it. 00:27:04.386 [2024-12-12 10:40:38.297266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.386 [2024-12-12 10:40:38.297298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.386 qpair failed and we were unable to recover it. 00:27:04.386 [2024-12-12 10:40:38.297483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.386 [2024-12-12 10:40:38.297515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.386 qpair failed and we were unable to recover it. 00:27:04.386 [2024-12-12 10:40:38.297722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.386 [2024-12-12 10:40:38.297755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.386 qpair failed and we were unable to recover it. 00:27:04.386 [2024-12-12 10:40:38.298020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.386 [2024-12-12 10:40:38.298052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.386 qpair failed and we were unable to recover it. 00:27:04.386 [2024-12-12 10:40:38.298288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.386 [2024-12-12 10:40:38.298319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.386 qpair failed and we were unable to recover it. 00:27:04.386 [2024-12-12 10:40:38.298509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.386 [2024-12-12 10:40:38.298542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.386 qpair failed and we were unable to recover it. 00:27:04.386 [2024-12-12 10:40:38.298709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.386 [2024-12-12 10:40:38.298743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.386 qpair failed and we were unable to recover it. 
00:27:04.386 [2024-12-12 10:40:38.299022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.386 [2024-12-12 10:40:38.299061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.386 qpair failed and we were unable to recover it. 00:27:04.386 [2024-12-12 10:40:38.299258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.386 [2024-12-12 10:40:38.299290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.386 qpair failed and we were unable to recover it. 00:27:04.386 [2024-12-12 10:40:38.299406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.386 [2024-12-12 10:40:38.299439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.386 qpair failed and we were unable to recover it. 00:27:04.386 [2024-12-12 10:40:38.299566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.386 [2024-12-12 10:40:38.299609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.386 qpair failed and we were unable to recover it. 00:27:04.386 [2024-12-12 10:40:38.299813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.386 [2024-12-12 10:40:38.299846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.386 qpair failed and we were unable to recover it. 00:27:04.386 [2024-12-12 10:40:38.300108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.386 [2024-12-12 10:40:38.300141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.386 qpair failed and we were unable to recover it. 00:27:04.386 [2024-12-12 10:40:38.300257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.386 [2024-12-12 10:40:38.300290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.386 qpair failed and we were unable to recover it. 00:27:04.386 [2024-12-12 10:40:38.300525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.386 [2024-12-12 10:40:38.300558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.386 qpair failed and we were unable to recover it. 00:27:04.386 [2024-12-12 10:40:38.300809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.386 [2024-12-12 10:40:38.300843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.386 qpair failed and we were unable to recover it. 00:27:04.386 [2024-12-12 10:40:38.301048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.386 [2024-12-12 10:40:38.301081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.386 qpair failed and we were unable to recover it. 
00:27:04.386 [2024-12-12 10:40:38.301207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.386 [2024-12-12 10:40:38.301240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.386 qpair failed and we were unable to recover it. 00:27:04.386 [2024-12-12 10:40:38.301413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.386 [2024-12-12 10:40:38.301445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.386 qpair failed and we were unable to recover it. 00:27:04.386 [2024-12-12 10:40:38.301636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.386 [2024-12-12 10:40:38.301671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.386 qpair failed and we were unable to recover it. 00:27:04.386 [2024-12-12 10:40:38.301912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.386 [2024-12-12 10:40:38.301944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.386 qpair failed and we were unable to recover it. 00:27:04.386 [2024-12-12 10:40:38.302069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.386 [2024-12-12 10:40:38.302102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.387 qpair failed and we were unable to recover it. 00:27:04.387 [2024-12-12 10:40:38.302345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.387 [2024-12-12 10:40:38.302377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.387 qpair failed and we were unable to recover it. 00:27:04.387 [2024-12-12 10:40:38.302588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.387 [2024-12-12 10:40:38.302621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.387 qpair failed and we were unable to recover it. 00:27:04.387 [2024-12-12 10:40:38.302739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.387 [2024-12-12 10:40:38.302774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.387 qpair failed and we were unable to recover it. 00:27:04.387 [2024-12-12 10:40:38.303010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.387 [2024-12-12 10:40:38.303043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.387 qpair failed and we were unable to recover it. 00:27:04.387 [2024-12-12 10:40:38.303279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.387 [2024-12-12 10:40:38.303312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.387 qpair failed and we were unable to recover it. 
00:27:04.387 [2024-12-12 10:40:38.303491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.387 [2024-12-12 10:40:38.303524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.387 qpair failed and we were unable to recover it. 00:27:04.387 [2024-12-12 10:40:38.303742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.387 [2024-12-12 10:40:38.303776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.387 qpair failed and we were unable to recover it. 00:27:04.387 [2024-12-12 10:40:38.304050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.387 [2024-12-12 10:40:38.304083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.387 qpair failed and we were unable to recover it. 00:27:04.387 [2024-12-12 10:40:38.304296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.387 [2024-12-12 10:40:38.304328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.387 qpair failed and we were unable to recover it. 00:27:04.387 [2024-12-12 10:40:38.304577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.387 [2024-12-12 10:40:38.304611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.387 qpair failed and we were unable to recover it. 00:27:04.387 [2024-12-12 10:40:38.304737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.387 [2024-12-12 10:40:38.304769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.387 qpair failed and we were unable to recover it. 00:27:04.387 [2024-12-12 10:40:38.304975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.387 [2024-12-12 10:40:38.305007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.387 qpair failed and we were unable to recover it. 00:27:04.387 [2024-12-12 10:40:38.305285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.387 [2024-12-12 10:40:38.305364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.387 qpair failed and we were unable to recover it. 00:27:04.387 [2024-12-12 10:40:38.305580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.387 [2024-12-12 10:40:38.305619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.387 qpair failed and we were unable to recover it. 00:27:04.387 [2024-12-12 10:40:38.305760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.387 [2024-12-12 10:40:38.305794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.387 qpair failed and we were unable to recover it. 
00:27:04.387 [2024-12-12 10:40:38.306056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.387 [2024-12-12 10:40:38.306089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.387 qpair failed and we were unable to recover it. 00:27:04.387 [2024-12-12 10:40:38.306261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.387 [2024-12-12 10:40:38.306294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.387 qpair failed and we were unable to recover it. 00:27:04.387 [2024-12-12 10:40:38.306421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.387 [2024-12-12 10:40:38.306455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.387 qpair failed and we were unable to recover it. 00:27:04.387 [2024-12-12 10:40:38.306733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.387 [2024-12-12 10:40:38.306768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.387 qpair failed and we were unable to recover it. 00:27:04.387 [2024-12-12 10:40:38.306954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.387 [2024-12-12 10:40:38.306987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.387 qpair failed and we were unable to recover it. 00:27:04.387 [2024-12-12 10:40:38.307196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.387 [2024-12-12 10:40:38.307229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.387 qpair failed and we were unable to recover it. 00:27:04.387 [2024-12-12 10:40:38.307417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.387 [2024-12-12 10:40:38.307449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.387 qpair failed and we were unable to recover it. 00:27:04.387 [2024-12-12 10:40:38.307621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.387 [2024-12-12 10:40:38.307656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.387 qpair failed and we were unable to recover it. 00:27:04.387 [2024-12-12 10:40:38.307796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.387 [2024-12-12 10:40:38.307828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.387 qpair failed and we were unable to recover it. 00:27:04.387 [2024-12-12 10:40:38.307999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.387 [2024-12-12 10:40:38.308032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.387 qpair failed and we were unable to recover it. 
00:27:04.387 [2024-12-12 10:40:38.308300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.388 [2024-12-12 10:40:38.308333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.388 qpair failed and we were unable to recover it. 00:27:04.388 [2024-12-12 10:40:38.308608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.388 [2024-12-12 10:40:38.308642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.388 qpair failed and we were unable to recover it. 00:27:04.388 [2024-12-12 10:40:38.308828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.388 [2024-12-12 10:40:38.308860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.388 qpair failed and we were unable to recover it. 00:27:04.388 [2024-12-12 10:40:38.309045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.388 [2024-12-12 10:40:38.309078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.388 qpair failed and we were unable to recover it. 00:27:04.388 [2024-12-12 10:40:38.309248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.388 [2024-12-12 10:40:38.309281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.388 qpair failed and we were unable to recover it. 00:27:04.388 [2024-12-12 10:40:38.309458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.388 [2024-12-12 10:40:38.309491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.388 qpair failed and we were unable to recover it. 00:27:04.388 [2024-12-12 10:40:38.309614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.388 [2024-12-12 10:40:38.309648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.388 qpair failed and we were unable to recover it. 00:27:04.388 [2024-12-12 10:40:38.309843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.388 [2024-12-12 10:40:38.309875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.388 qpair failed and we were unable to recover it. 00:27:04.388 [2024-12-12 10:40:38.310065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.388 [2024-12-12 10:40:38.310098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.388 qpair failed and we were unable to recover it. 00:27:04.388 [2024-12-12 10:40:38.310350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.388 [2024-12-12 10:40:38.310387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.388 qpair failed and we were unable to recover it. 
00:27:04.388 [2024-12-12 10:40:38.310525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.388 [2024-12-12 10:40:38.310558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.388 qpair failed and we were unable to recover it. 00:27:04.388 [2024-12-12 10:40:38.310758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.388 [2024-12-12 10:40:38.310792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.388 qpair failed and we were unable to recover it. 00:27:04.388 [2024-12-12 10:40:38.310976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.388 [2024-12-12 10:40:38.311009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.388 qpair failed and we were unable to recover it. 00:27:04.388 [2024-12-12 10:40:38.311274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.388 [2024-12-12 10:40:38.311306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.388 qpair failed and we were unable to recover it. 00:27:04.388 [2024-12-12 10:40:38.311493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.388 [2024-12-12 10:40:38.311533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.388 qpair failed and we were unable to recover it. 00:27:04.388 [2024-12-12 10:40:38.311756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.388 [2024-12-12 10:40:38.311791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.388 qpair failed and we were unable to recover it. 00:27:04.388 [2024-12-12 10:40:38.311987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.388 [2024-12-12 10:40:38.312019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.388 qpair failed and we were unable to recover it. 00:27:04.388 [2024-12-12 10:40:38.312155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.388 [2024-12-12 10:40:38.312188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.388 qpair failed and we were unable to recover it. 00:27:04.388 [2024-12-12 10:40:38.312378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.388 [2024-12-12 10:40:38.312411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.388 qpair failed and we were unable to recover it. 00:27:04.388 [2024-12-12 10:40:38.312600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.388 [2024-12-12 10:40:38.312635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.388 qpair failed and we were unable to recover it. 
00:27:04.388 [2024-12-12 10:40:38.312843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.388 [2024-12-12 10:40:38.312876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.389 qpair failed and we were unable to recover it. 00:27:04.389 [2024-12-12 10:40:38.313116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.389 [2024-12-12 10:40:38.313149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.389 qpair failed and we were unable to recover it. 00:27:04.389 [2024-12-12 10:40:38.313387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.389 [2024-12-12 10:40:38.313420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.389 qpair failed and we were unable to recover it. 00:27:04.389 [2024-12-12 10:40:38.313613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.389 [2024-12-12 10:40:38.313647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.389 qpair failed and we were unable to recover it. 00:27:04.389 [2024-12-12 10:40:38.313844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.389 [2024-12-12 10:40:38.313878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.389 qpair failed and we were unable to recover it. 00:27:04.389 [2024-12-12 10:40:38.314061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.389 [2024-12-12 10:40:38.314095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.389 qpair failed and we were unable to recover it. 00:27:04.389 [2024-12-12 10:40:38.314280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.389 [2024-12-12 10:40:38.314313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.389 qpair failed and we were unable to recover it. 00:27:04.389 [2024-12-12 10:40:38.314483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.389 [2024-12-12 10:40:38.314516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.389 qpair failed and we were unable to recover it. 00:27:04.389 [2024-12-12 10:40:38.314662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.389 [2024-12-12 10:40:38.314696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.389 qpair failed and we were unable to recover it. 00:27:04.389 [2024-12-12 10:40:38.314901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.389 [2024-12-12 10:40:38.314933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.389 qpair failed and we were unable to recover it. 
00:27:04.389 [2024-12-12 10:40:38.315175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.389 [2024-12-12 10:40:38.315207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.389 qpair failed and we were unable to recover it. 00:27:04.389 [2024-12-12 10:40:38.315387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.389 [2024-12-12 10:40:38.315420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.389 qpair failed and we were unable to recover it. 00:27:04.389 [2024-12-12 10:40:38.315596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.389 [2024-12-12 10:40:38.315630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.389 qpair failed and we were unable to recover it. 00:27:04.389 [2024-12-12 10:40:38.315803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.389 [2024-12-12 10:40:38.315835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.389 qpair failed and we were unable to recover it. 00:27:04.389 [2024-12-12 10:40:38.316018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.389 [2024-12-12 10:40:38.316052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.389 qpair failed and we were unable to recover it. 00:27:04.389 [2024-12-12 10:40:38.316248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.389 [2024-12-12 10:40:38.316281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.389 qpair failed and we were unable to recover it. 00:27:04.389 [2024-12-12 10:40:38.316522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.389 [2024-12-12 10:40:38.316554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.389 qpair failed and we were unable to recover it. 00:27:04.389 [2024-12-12 10:40:38.316754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.389 [2024-12-12 10:40:38.316787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.389 qpair failed and we were unable to recover it. 00:27:04.389 [2024-12-12 10:40:38.316970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.389 [2024-12-12 10:40:38.317003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.389 qpair failed and we were unable to recover it. 00:27:04.389 [2024-12-12 10:40:38.317119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.389 [2024-12-12 10:40:38.317151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.389 qpair failed and we were unable to recover it. 
00:27:04.389 [2024-12-12 10:40:38.317340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.389 [2024-12-12 10:40:38.317372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.389 qpair failed and we were unable to recover it. 00:27:04.389 [2024-12-12 10:40:38.317553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.389 [2024-12-12 10:40:38.317610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.389 qpair failed and we were unable to recover it. 00:27:04.389 [2024-12-12 10:40:38.317812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.389 [2024-12-12 10:40:38.317845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.389 qpair failed and we were unable to recover it. 00:27:04.389 [2024-12-12 10:40:38.318036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.389 [2024-12-12 10:40:38.318069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.389 qpair failed and we were unable to recover it. 00:27:04.389 [2024-12-12 10:40:38.318266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.389 [2024-12-12 10:40:38.318299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.389 qpair failed and we were unable to recover it. 00:27:04.389 [2024-12-12 10:40:38.318508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.389 [2024-12-12 10:40:38.318541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.389 qpair failed and we were unable to recover it. 00:27:04.389 [2024-12-12 10:40:38.318728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.389 [2024-12-12 10:40:38.318762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.389 qpair failed and we were unable to recover it. 00:27:04.389 [2024-12-12 10:40:38.318949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.389 [2024-12-12 10:40:38.318982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.389 qpair failed and we were unable to recover it. 00:27:04.389 [2024-12-12 10:40:38.319170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.390 [2024-12-12 10:40:38.319203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.390 qpair failed and we were unable to recover it. 00:27:04.390 [2024-12-12 10:40:38.319342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.390 [2024-12-12 10:40:38.319375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.390 qpair failed and we were unable to recover it. 
00:27:04.390 [2024-12-12 10:40:38.319495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.390 [2024-12-12 10:40:38.319528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.390 qpair failed and we were unable to recover it. 00:27:04.390 [2024-12-12 10:40:38.319777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.390 [2024-12-12 10:40:38.319811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.390 qpair failed and we were unable to recover it. 00:27:04.390 [2024-12-12 10:40:38.319938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.390 [2024-12-12 10:40:38.319971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.390 qpair failed and we were unable to recover it. 00:27:04.390 [2024-12-12 10:40:38.320234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.390 [2024-12-12 10:40:38.320266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.390 qpair failed and we were unable to recover it. 00:27:04.390 [2024-12-12 10:40:38.320395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.390 [2024-12-12 10:40:38.320428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.390 qpair failed and we were unable to recover it. 00:27:04.390 [2024-12-12 10:40:38.320728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.390 [2024-12-12 10:40:38.320764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.390 qpair failed and we were unable to recover it. 00:27:04.390 [2024-12-12 10:40:38.321023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.390 [2024-12-12 10:40:38.321056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.390 qpair failed and we were unable to recover it. 00:27:04.390 [2024-12-12 10:40:38.321198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.390 [2024-12-12 10:40:38.321231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.390 qpair failed and we were unable to recover it. 00:27:04.390 [2024-12-12 10:40:38.321473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.390 [2024-12-12 10:40:38.321507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.390 qpair failed and we were unable to recover it. 00:27:04.390 [2024-12-12 10:40:38.321704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.390 [2024-12-12 10:40:38.321739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.390 qpair failed and we were unable to recover it. 
00:27:04.390 [2024-12-12 10:40:38.321923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.390 [2024-12-12 10:40:38.321956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:04.390 qpair failed and we were unable to recover it.
00:27:04.390 [... this three-line pattern (connect() refused with errno = 111, then the nvme_tcp qpair connection error for tqpair=0x1c1b1a0 at 10.0.0.2:4420, then the unrecovered-qpair message) repeats back-to-back for every reconnect attempt from 10:40:38.321923 through 10:40:38.366466 ...]
00:27:04.396 [2024-12-12 10:40:38.366434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.396 [2024-12-12 10:40:38.366466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:04.396 qpair failed and we were unable to recover it.
00:27:04.396 [2024-12-12 10:40:38.366669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.396 [2024-12-12 10:40:38.366703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.396 qpair failed and we were unable to recover it. 00:27:04.396 [2024-12-12 10:40:38.366876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.396 [2024-12-12 10:40:38.366909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.396 qpair failed and we were unable to recover it. 00:27:04.396 [2024-12-12 10:40:38.367037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.396 [2024-12-12 10:40:38.367069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.396 qpair failed and we were unable to recover it. 00:27:04.396 [2024-12-12 10:40:38.367184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.396 [2024-12-12 10:40:38.367216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.396 qpair failed and we were unable to recover it. 00:27:04.396 [2024-12-12 10:40:38.367329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.396 [2024-12-12 10:40:38.367362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.396 qpair failed and we were unable to recover it. 00:27:04.396 [2024-12-12 10:40:38.367475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.397 [2024-12-12 10:40:38.367506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.397 qpair failed and we were unable to recover it. 00:27:04.397 [2024-12-12 10:40:38.367676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.397 [2024-12-12 10:40:38.367710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.397 qpair failed and we were unable to recover it. 00:27:04.397 [2024-12-12 10:40:38.367949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.397 [2024-12-12 10:40:38.367981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.397 qpair failed and we were unable to recover it. 00:27:04.397 [2024-12-12 10:40:38.368225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.397 [2024-12-12 10:40:38.368257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.397 qpair failed and we were unable to recover it. 00:27:04.397 [2024-12-12 10:40:38.368450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.397 [2024-12-12 10:40:38.368483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.397 qpair failed and we were unable to recover it. 
00:27:04.397 [2024-12-12 10:40:38.368725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.397 [2024-12-12 10:40:38.368760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.397 qpair failed and we were unable to recover it. 00:27:04.397 [2024-12-12 10:40:38.369018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.397 [2024-12-12 10:40:38.369050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.397 qpair failed and we were unable to recover it. 00:27:04.397 [2024-12-12 10:40:38.369288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.397 [2024-12-12 10:40:38.369320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.397 qpair failed and we were unable to recover it. 00:27:04.397 [2024-12-12 10:40:38.369488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.397 [2024-12-12 10:40:38.369521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.397 qpair failed and we were unable to recover it. 00:27:04.397 [2024-12-12 10:40:38.369722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.397 [2024-12-12 10:40:38.369756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.397 qpair failed and we were unable to recover it. 00:27:04.397 [2024-12-12 10:40:38.369995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.397 [2024-12-12 10:40:38.370028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.397 qpair failed and we were unable to recover it. 00:27:04.397 [2024-12-12 10:40:38.370160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.397 [2024-12-12 10:40:38.370192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.397 qpair failed and we were unable to recover it. 00:27:04.397 [2024-12-12 10:40:38.370332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.397 [2024-12-12 10:40:38.370365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.397 qpair failed and we were unable to recover it. 00:27:04.397 [2024-12-12 10:40:38.370615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.397 [2024-12-12 10:40:38.370650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.397 qpair failed and we were unable to recover it. 00:27:04.397 [2024-12-12 10:40:38.370821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.397 [2024-12-12 10:40:38.370853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.397 qpair failed and we were unable to recover it. 
00:27:04.397 [2024-12-12 10:40:38.371024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.397 [2024-12-12 10:40:38.371057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.397 qpair failed and we were unable to recover it. 00:27:04.397 [2024-12-12 10:40:38.371230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.397 [2024-12-12 10:40:38.371263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.397 qpair failed and we were unable to recover it. 00:27:04.397 [2024-12-12 10:40:38.371458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.397 [2024-12-12 10:40:38.371490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.397 qpair failed and we were unable to recover it. 00:27:04.397 [2024-12-12 10:40:38.371668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.397 [2024-12-12 10:40:38.371701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.397 qpair failed and we were unable to recover it. 00:27:04.397 [2024-12-12 10:40:38.371889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.397 [2024-12-12 10:40:38.371921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.397 qpair failed and we were unable to recover it. 00:27:04.397 [2024-12-12 10:40:38.372133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.397 [2024-12-12 10:40:38.372165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.397 qpair failed and we were unable to recover it. 00:27:04.397 [2024-12-12 10:40:38.372368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.397 [2024-12-12 10:40:38.372401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.397 qpair failed and we were unable to recover it. 00:27:04.397 [2024-12-12 10:40:38.372506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.397 [2024-12-12 10:40:38.372538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.397 qpair failed and we were unable to recover it. 00:27:04.397 [2024-12-12 10:40:38.372718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.397 [2024-12-12 10:40:38.372751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.397 qpair failed and we were unable to recover it. 00:27:04.397 [2024-12-12 10:40:38.372991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.397 [2024-12-12 10:40:38.373063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.397 qpair failed and we were unable to recover it. 
00:27:04.397 [2024-12-12 10:40:38.373220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.397 [2024-12-12 10:40:38.373257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.397 qpair failed and we were unable to recover it. 00:27:04.397 [2024-12-12 10:40:38.373444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.397 [2024-12-12 10:40:38.373477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.397 qpair failed and we were unable to recover it. 00:27:04.397 [2024-12-12 10:40:38.373589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.397 [2024-12-12 10:40:38.373624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.397 qpair failed and we were unable to recover it. 00:27:04.397 [2024-12-12 10:40:38.373881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.397 [2024-12-12 10:40:38.373914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.397 qpair failed and we were unable to recover it. 00:27:04.397 [2024-12-12 10:40:38.374098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.397 [2024-12-12 10:40:38.374130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.397 qpair failed and we were unable to recover it. 00:27:04.397 [2024-12-12 10:40:38.374337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.397 [2024-12-12 10:40:38.374370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.397 qpair failed and we were unable to recover it. 00:27:04.397 [2024-12-12 10:40:38.374544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.397 [2024-12-12 10:40:38.374585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.397 qpair failed and we were unable to recover it. 00:27:04.397 [2024-12-12 10:40:38.374720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.397 [2024-12-12 10:40:38.374751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.397 qpair failed and we were unable to recover it. 00:27:04.397 [2024-12-12 10:40:38.374866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.397 [2024-12-12 10:40:38.374899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.397 qpair failed and we were unable to recover it. 00:27:04.397 [2024-12-12 10:40:38.375077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.397 [2024-12-12 10:40:38.375108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.397 qpair failed and we were unable to recover it. 
00:27:04.397 [2024-12-12 10:40:38.375313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.397 [2024-12-12 10:40:38.375346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.397 qpair failed and we were unable to recover it. 00:27:04.397 [2024-12-12 10:40:38.375519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.397 [2024-12-12 10:40:38.375552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.397 qpair failed and we were unable to recover it. 00:27:04.397 [2024-12-12 10:40:38.375739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.397 [2024-12-12 10:40:38.375781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.397 qpair failed and we were unable to recover it. 00:27:04.398 [2024-12-12 10:40:38.375974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.398 [2024-12-12 10:40:38.376006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.398 qpair failed and we were unable to recover it. 00:27:04.398 [2024-12-12 10:40:38.376267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.398 [2024-12-12 10:40:38.376299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.398 qpair failed and we were unable to recover it. 00:27:04.398 [2024-12-12 10:40:38.376559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.398 [2024-12-12 10:40:38.376604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.398 qpair failed and we were unable to recover it. 00:27:04.398 [2024-12-12 10:40:38.376727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.398 [2024-12-12 10:40:38.376758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.398 qpair failed and we were unable to recover it. 00:27:04.398 [2024-12-12 10:40:38.376889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.398 [2024-12-12 10:40:38.376921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.398 qpair failed and we were unable to recover it. 00:27:04.398 [2024-12-12 10:40:38.377091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.398 [2024-12-12 10:40:38.377123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.398 qpair failed and we were unable to recover it. 00:27:04.398 [2024-12-12 10:40:38.377363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.398 [2024-12-12 10:40:38.377395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.398 qpair failed and we were unable to recover it. 
00:27:04.398 [2024-12-12 10:40:38.377566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.398 [2024-12-12 10:40:38.377611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.398 qpair failed and we were unable to recover it. 00:27:04.398 [2024-12-12 10:40:38.377787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.398 [2024-12-12 10:40:38.377819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.398 qpair failed and we were unable to recover it. 00:27:04.398 [2024-12-12 10:40:38.377938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.398 [2024-12-12 10:40:38.377970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.398 qpair failed and we were unable to recover it. 00:27:04.398 [2024-12-12 10:40:38.378143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.398 [2024-12-12 10:40:38.378175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.398 qpair failed and we were unable to recover it. 00:27:04.398 [2024-12-12 10:40:38.378417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.398 [2024-12-12 10:40:38.378448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.398 qpair failed and we were unable to recover it. 00:27:04.398 [2024-12-12 10:40:38.378558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.398 [2024-12-12 10:40:38.378601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.398 qpair failed and we were unable to recover it. 00:27:04.398 [2024-12-12 10:40:38.378817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.398 [2024-12-12 10:40:38.378849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.398 qpair failed and we were unable to recover it. 00:27:04.398 [2024-12-12 10:40:38.379036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.398 [2024-12-12 10:40:38.379068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.398 qpair failed and we were unable to recover it. 00:27:04.398 [2024-12-12 10:40:38.379245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.398 [2024-12-12 10:40:38.379276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.398 qpair failed and we were unable to recover it. 00:27:04.398 [2024-12-12 10:40:38.379565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.398 [2024-12-12 10:40:38.379610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.398 qpair failed and we were unable to recover it. 
00:27:04.398 [2024-12-12 10:40:38.379795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.398 [2024-12-12 10:40:38.379828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.398 qpair failed and we were unable to recover it. 00:27:04.398 [2024-12-12 10:40:38.380088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.398 [2024-12-12 10:40:38.380120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.398 qpair failed and we were unable to recover it. 00:27:04.398 [2024-12-12 10:40:38.380306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.398 [2024-12-12 10:40:38.380337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.398 qpair failed and we were unable to recover it. 00:27:04.398 [2024-12-12 10:40:38.380543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.398 [2024-12-12 10:40:38.380585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.398 qpair failed and we were unable to recover it. 00:27:04.398 [2024-12-12 10:40:38.380786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.398 [2024-12-12 10:40:38.380818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.398 qpair failed and we were unable to recover it. 00:27:04.398 [2024-12-12 10:40:38.381025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.398 [2024-12-12 10:40:38.381057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.398 qpair failed and we were unable to recover it. 00:27:04.398 [2024-12-12 10:40:38.381257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.398 [2024-12-12 10:40:38.381289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.398 qpair failed and we were unable to recover it. 00:27:04.398 [2024-12-12 10:40:38.381481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.398 [2024-12-12 10:40:38.381514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.398 qpair failed and we were unable to recover it. 00:27:04.398 [2024-12-12 10:40:38.381697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.398 [2024-12-12 10:40:38.381730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:04.398 qpair failed and we were unable to recover it. 00:27:04.398 [2024-12-12 10:40:38.381904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.398 [2024-12-12 10:40:38.381993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.398 qpair failed and we were unable to recover it. 
00:27:04.398 [2024-12-12 10:40:38.382265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.398 [2024-12-12 10:40:38.382301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.398 qpair failed and we were unable to recover it. 00:27:04.398 [2024-12-12 10:40:38.382547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.398 [2024-12-12 10:40:38.382595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.398 qpair failed and we were unable to recover it. 00:27:04.398 [2024-12-12 10:40:38.382854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.398 [2024-12-12 10:40:38.382887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.398 qpair failed and we were unable to recover it. 00:27:04.398 [2024-12-12 10:40:38.383151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.398 [2024-12-12 10:40:38.383183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.398 qpair failed and we were unable to recover it. 00:27:04.398 [2024-12-12 10:40:38.383370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.398 [2024-12-12 10:40:38.383402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.398 qpair failed and we were unable to recover it. 00:27:04.398 [2024-12-12 10:40:38.383598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.398 [2024-12-12 10:40:38.383633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.398 qpair failed and we were unable to recover it. 00:27:04.398 [2024-12-12 10:40:38.383820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.398 [2024-12-12 10:40:38.383852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.398 qpair failed and we were unable to recover it. 00:27:04.398 [2024-12-12 10:40:38.384041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.398 [2024-12-12 10:40:38.384073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.398 qpair failed and we were unable to recover it. 00:27:04.398 [2024-12-12 10:40:38.384340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.398 [2024-12-12 10:40:38.384372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.398 qpair failed and we were unable to recover it. 00:27:04.398 [2024-12-12 10:40:38.384557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.398 [2024-12-12 10:40:38.384604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.398 qpair failed and we were unable to recover it. 
00:27:04.398 [2024-12-12 10:40:38.384845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.399 [2024-12-12 10:40:38.384878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.399 qpair failed and we were unable to recover it. 00:27:04.399 [2024-12-12 10:40:38.385076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.399 [2024-12-12 10:40:38.385109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.399 qpair failed and we were unable to recover it. 00:27:04.399 [2024-12-12 10:40:38.385382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.399 [2024-12-12 10:40:38.385422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.399 qpair failed and we were unable to recover it. 00:27:04.399 [2024-12-12 10:40:38.385612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.399 [2024-12-12 10:40:38.385647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.399 qpair failed and we were unable to recover it. 00:27:04.399 [2024-12-12 10:40:38.385860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.399 [2024-12-12 10:40:38.385892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.399 qpair failed and we were unable to recover it. 00:27:04.399 [2024-12-12 10:40:38.386030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.399 [2024-12-12 10:40:38.386063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.399 qpair failed and we were unable to recover it. 00:27:04.399 [2024-12-12 10:40:38.386279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.399 [2024-12-12 10:40:38.386311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.399 qpair failed and we were unable to recover it. 00:27:04.399 [2024-12-12 10:40:38.386506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.399 [2024-12-12 10:40:38.386538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.399 qpair failed and we were unable to recover it. 00:27:04.399 [2024-12-12 10:40:38.386683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.399 [2024-12-12 10:40:38.386716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.399 qpair failed and we were unable to recover it. 00:27:04.399 [2024-12-12 10:40:38.386903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.399 [2024-12-12 10:40:38.386936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.399 qpair failed and we were unable to recover it. 
00:27:04.399 [2024-12-12 10:40:38.387060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.399 [2024-12-12 10:40:38.387093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.399 qpair failed and we were unable to recover it. 00:27:04.399 [2024-12-12 10:40:38.387219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.399 [2024-12-12 10:40:38.387252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.399 qpair failed and we were unable to recover it. 00:27:04.399 [2024-12-12 10:40:38.387378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.399 [2024-12-12 10:40:38.387410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.399 qpair failed and we were unable to recover it. 00:27:04.399 [2024-12-12 10:40:38.387533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.399 [2024-12-12 10:40:38.387566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.399 qpair failed and we were unable to recover it. 00:27:04.399 [2024-12-12 10:40:38.387746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.399 [2024-12-12 10:40:38.387778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.399 qpair failed and we were unable to recover it. 00:27:04.399 [2024-12-12 10:40:38.387912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.399 [2024-12-12 10:40:38.387945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.399 qpair failed and we were unable to recover it. 00:27:04.399 [2024-12-12 10:40:38.388127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.399 [2024-12-12 10:40:38.388159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.399 qpair failed and we were unable to recover it. 00:27:04.399 [2024-12-12 10:40:38.388358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.399 [2024-12-12 10:40:38.388390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.399 qpair failed and we were unable to recover it. 00:27:04.399 [2024-12-12 10:40:38.388517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.399 [2024-12-12 10:40:38.388550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.399 qpair failed and we were unable to recover it. 00:27:04.399 [2024-12-12 10:40:38.388797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.399 [2024-12-12 10:40:38.388830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.399 qpair failed and we were unable to recover it. 
00:27:04.399 [2024-12-12 10:40:38.389040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.399 [2024-12-12 10:40:38.389073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.399 qpair failed and we were unable to recover it. 00:27:04.399 [2024-12-12 10:40:38.389245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.399 [2024-12-12 10:40:38.389279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.399 qpair failed and we were unable to recover it. 00:27:04.399 [2024-12-12 10:40:38.389385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.399 [2024-12-12 10:40:38.389416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.399 qpair failed and we were unable to recover it. 00:27:04.679 [2024-12-12 10:40:38.389652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.679 [2024-12-12 10:40:38.389688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.679 qpair failed and we were unable to recover it. 00:27:04.679 [2024-12-12 10:40:38.389814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.679 [2024-12-12 10:40:38.389846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.679 qpair failed and we were unable to recover it. 00:27:04.679 [2024-12-12 10:40:38.390034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.679 [2024-12-12 10:40:38.390067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.679 qpair failed and we were unable to recover it. 00:27:04.679 [2024-12-12 10:40:38.390315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.679 [2024-12-12 10:40:38.390347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.679 qpair failed and we were unable to recover it. 00:27:04.679 [2024-12-12 10:40:38.390603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.679 [2024-12-12 10:40:38.390638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.679 qpair failed and we were unable to recover it. 00:27:04.679 [2024-12-12 10:40:38.390918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.679 [2024-12-12 10:40:38.390950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.679 qpair failed and we were unable to recover it. 00:27:04.679 [2024-12-12 10:40:38.391276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-12-12 10:40:38.391357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 
00:27:04.680 [2024-12-12 10:40:38.391511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-12-12 10:40:38.391547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 00:27:04.680 [2024-12-12 10:40:38.391844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-12-12 10:40:38.391879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 00:27:04.680 [2024-12-12 10:40:38.392083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-12-12 10:40:38.392115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 00:27:04.680 [2024-12-12 10:40:38.392293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-12-12 10:40:38.392326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 00:27:04.680 [2024-12-12 10:40:38.392596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-12-12 10:40:38.392638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 00:27:04.680 [2024-12-12 10:40:38.392877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-12-12 10:40:38.392909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 00:27:04.680 [2024-12-12 10:40:38.393101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-12-12 10:40:38.393134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 00:27:04.680 [2024-12-12 10:40:38.393267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-12-12 10:40:38.393300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 00:27:04.680 [2024-12-12 10:40:38.393520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-12-12 10:40:38.393553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 00:27:04.680 [2024-12-12 10:40:38.393688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-12-12 10:40:38.393721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 
00:27:04.680 [2024-12-12 10:40:38.393957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-12-12 10:40:38.393989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 00:27:04.680 [2024-12-12 10:40:38.394114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-12-12 10:40:38.394146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 00:27:04.680 [2024-12-12 10:40:38.394276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-12-12 10:40:38.394317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 00:27:04.680 [2024-12-12 10:40:38.394450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-12-12 10:40:38.394483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 00:27:04.680 [2024-12-12 10:40:38.394658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-12-12 10:40:38.394692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 00:27:04.680 [2024-12-12 10:40:38.394912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-12-12 10:40:38.394944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 00:27:04.680 [2024-12-12 10:40:38.395182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-12-12 10:40:38.395215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 00:27:04.680 [2024-12-12 10:40:38.395329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-12-12 10:40:38.395362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 00:27:04.680 [2024-12-12 10:40:38.395493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-12-12 10:40:38.395526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 00:27:04.680 [2024-12-12 10:40:38.395747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-12-12 10:40:38.395782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 
00:27:04.680 [2024-12-12 10:40:38.395973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.680 [2024-12-12 10:40:38.396007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:04.680 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / qpair failed sequence repeated 18 more times for tqpair=0x7fb830000b90, 10:40:38.396132 - 10:40:38.400098 ...]
00:27:04.681 [2024-12-12 10:40:38.400423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.681 [2024-12-12 10:40:38.400495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:04.681 qpair failed and we were unable to recover it.
00:27:04.681 [2024-12-12 10:40:38.400785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.681 [2024-12-12 10:40:38.400858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420
00:27:04.681 qpair failed and we were unable to recover it.
00:27:04.681 [2024-12-12 10:40:38.401091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.681 [2024-12-12 10:40:38.401127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:04.681 qpair failed and we were unable to recover it.
[... identical sequence repeated 188 more times for tqpair=0x7fb838000b90, 10:40:38.401390 - 10:40:38.441393 (log time 00:27:04.681 - 00:27:04.686); every connect() to 10.0.0.2 port 4420 is refused (errno = 111, ECONNREFUSED) and no qpair recovers ...]
00:27:04.686 [2024-12-12 10:40:38.441567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.686 [2024-12-12 10:40:38.441611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.686 qpair failed and we were unable to recover it. 00:27:04.686 [2024-12-12 10:40:38.441786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.686 [2024-12-12 10:40:38.441818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.686 qpair failed and we were unable to recover it. 00:27:04.686 [2024-12-12 10:40:38.441941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.686 [2024-12-12 10:40:38.441973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.686 qpair failed and we were unable to recover it. 00:27:04.686 [2024-12-12 10:40:38.442151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.686 [2024-12-12 10:40:38.442183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.686 qpair failed and we were unable to recover it. 00:27:04.686 [2024-12-12 10:40:38.442355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.686 [2024-12-12 10:40:38.442387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.686 qpair failed and we were unable to recover it. 00:27:04.686 [2024-12-12 10:40:38.442602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.686 [2024-12-12 10:40:38.442636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.686 qpair failed and we were unable to recover it. 00:27:04.686 [2024-12-12 10:40:38.442907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.686 [2024-12-12 10:40:38.442939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.686 qpair failed and we were unable to recover it. 00:27:04.686 [2024-12-12 10:40:38.443113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.686 [2024-12-12 10:40:38.443146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.686 qpair failed and we were unable to recover it. 00:27:04.686 [2024-12-12 10:40:38.443357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.686 [2024-12-12 10:40:38.443389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.686 qpair failed and we were unable to recover it. 00:27:04.686 [2024-12-12 10:40:38.443680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.686 [2024-12-12 10:40:38.443721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.686 qpair failed and we were unable to recover it. 
00:27:04.686 [2024-12-12 10:40:38.443913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.686 [2024-12-12 10:40:38.443945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.686 qpair failed and we were unable to recover it. 00:27:04.686 [2024-12-12 10:40:38.444059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.686 [2024-12-12 10:40:38.444092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.686 qpair failed and we were unable to recover it. 00:27:04.686 [2024-12-12 10:40:38.444261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.686 [2024-12-12 10:40:38.444293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.686 qpair failed and we were unable to recover it. 00:27:04.686 [2024-12-12 10:40:38.444553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.687 [2024-12-12 10:40:38.444598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.687 qpair failed and we were unable to recover it. 00:27:04.687 [2024-12-12 10:40:38.444864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.687 [2024-12-12 10:40:38.444896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.687 qpair failed and we were unable to recover it. 00:27:04.687 [2024-12-12 10:40:38.445022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.687 [2024-12-12 10:40:38.445055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.687 qpair failed and we were unable to recover it. 00:27:04.687 [2024-12-12 10:40:38.445261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.687 [2024-12-12 10:40:38.445294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.687 qpair failed and we were unable to recover it. 00:27:04.687 [2024-12-12 10:40:38.445491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.687 [2024-12-12 10:40:38.445524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.687 qpair failed and we were unable to recover it. 00:27:04.687 [2024-12-12 10:40:38.445799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.687 [2024-12-12 10:40:38.445834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.687 qpair failed and we were unable to recover it. 00:27:04.687 [2024-12-12 10:40:38.445953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.687 [2024-12-12 10:40:38.445984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.687 qpair failed and we were unable to recover it. 
00:27:04.687 [2024-12-12 10:40:38.446114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.687 [2024-12-12 10:40:38.446146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.687 qpair failed and we were unable to recover it. 00:27:04.687 [2024-12-12 10:40:38.446386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.687 [2024-12-12 10:40:38.446418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.687 qpair failed and we were unable to recover it. 00:27:04.687 [2024-12-12 10:40:38.446535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.687 [2024-12-12 10:40:38.446567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.687 qpair failed and we were unable to recover it. 00:27:04.687 [2024-12-12 10:40:38.446758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.687 [2024-12-12 10:40:38.446791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.687 qpair failed and we were unable to recover it. 00:27:04.687 [2024-12-12 10:40:38.447024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.687 [2024-12-12 10:40:38.447056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.687 qpair failed and we were unable to recover it. 00:27:04.687 [2024-12-12 10:40:38.447264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.687 [2024-12-12 10:40:38.447297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.687 qpair failed and we were unable to recover it. 00:27:04.687 [2024-12-12 10:40:38.447415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.687 [2024-12-12 10:40:38.447448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.687 qpair failed and we were unable to recover it. 00:27:04.687 [2024-12-12 10:40:38.447711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.687 [2024-12-12 10:40:38.447744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.687 qpair failed and we were unable to recover it. 00:27:04.687 [2024-12-12 10:40:38.447858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.687 [2024-12-12 10:40:38.447891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.687 qpair failed and we were unable to recover it. 00:27:04.687 [2024-12-12 10:40:38.448066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.687 [2024-12-12 10:40:38.448099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.687 qpair failed and we were unable to recover it. 
00:27:04.687 [2024-12-12 10:40:38.448281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.687 [2024-12-12 10:40:38.448313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.687 qpair failed and we were unable to recover it. 00:27:04.687 [2024-12-12 10:40:38.448506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.687 [2024-12-12 10:40:38.448538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.687 qpair failed and we were unable to recover it. 00:27:04.687 [2024-12-12 10:40:38.448720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.687 [2024-12-12 10:40:38.448753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.687 qpair failed and we were unable to recover it. 00:27:04.687 [2024-12-12 10:40:38.448924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.687 [2024-12-12 10:40:38.448957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.687 qpair failed and we were unable to recover it. 00:27:04.687 [2024-12-12 10:40:38.449148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.687 [2024-12-12 10:40:38.449180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.687 qpair failed and we were unable to recover it. 00:27:04.687 [2024-12-12 10:40:38.449391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.687 [2024-12-12 10:40:38.449424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.687 qpair failed and we were unable to recover it. 00:27:04.687 [2024-12-12 10:40:38.449612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.687 [2024-12-12 10:40:38.449647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.687 qpair failed and we were unable to recover it. 00:27:04.687 [2024-12-12 10:40:38.449757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.687 [2024-12-12 10:40:38.449789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.687 qpair failed and we were unable to recover it. 00:27:04.687 [2024-12-12 10:40:38.450039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.687 [2024-12-12 10:40:38.450072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.687 qpair failed and we were unable to recover it. 00:27:04.687 [2024-12-12 10:40:38.450246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.687 [2024-12-12 10:40:38.450280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.687 qpair failed and we were unable to recover it. 
00:27:04.687 [2024-12-12 10:40:38.450395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.687 [2024-12-12 10:40:38.450428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.687 qpair failed and we were unable to recover it. 00:27:04.687 [2024-12-12 10:40:38.450550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.687 [2024-12-12 10:40:38.450591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.687 qpair failed and we were unable to recover it. 00:27:04.687 [2024-12-12 10:40:38.450711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.687 [2024-12-12 10:40:38.450744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.687 qpair failed and we were unable to recover it. 00:27:04.687 [2024-12-12 10:40:38.451008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.687 [2024-12-12 10:40:38.451040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.687 qpair failed and we were unable to recover it. 00:27:04.687 [2024-12-12 10:40:38.451144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.687 [2024-12-12 10:40:38.451177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.687 qpair failed and we were unable to recover it. 00:27:04.687 [2024-12-12 10:40:38.451294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.687 [2024-12-12 10:40:38.451326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.687 qpair failed and we were unable to recover it. 00:27:04.687 [2024-12-12 10:40:38.451499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.687 [2024-12-12 10:40:38.451532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.687 qpair failed and we were unable to recover it. 00:27:04.687 [2024-12-12 10:40:38.451758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.687 [2024-12-12 10:40:38.451792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.687 qpair failed and we were unable to recover it. 00:27:04.687 [2024-12-12 10:40:38.451908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.688 [2024-12-12 10:40:38.451941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.688 qpair failed and we were unable to recover it. 00:27:04.688 [2024-12-12 10:40:38.452066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.688 [2024-12-12 10:40:38.452104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.688 qpair failed and we were unable to recover it. 
00:27:04.688 [2024-12-12 10:40:38.452365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.688 [2024-12-12 10:40:38.452397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.688 qpair failed and we were unable to recover it. 00:27:04.688 [2024-12-12 10:40:38.452582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.688 [2024-12-12 10:40:38.452616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.688 qpair failed and we were unable to recover it. 00:27:04.688 [2024-12-12 10:40:38.452813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.688 [2024-12-12 10:40:38.452845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.688 qpair failed and we were unable to recover it. 00:27:04.688 [2024-12-12 10:40:38.453024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.688 [2024-12-12 10:40:38.453056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.688 qpair failed and we were unable to recover it. 00:27:04.688 [2024-12-12 10:40:38.453253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.688 [2024-12-12 10:40:38.453285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.688 qpair failed and we were unable to recover it. 00:27:04.688 [2024-12-12 10:40:38.453404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.688 [2024-12-12 10:40:38.453437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.688 qpair failed and we were unable to recover it. 00:27:04.688 [2024-12-12 10:40:38.453619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.688 [2024-12-12 10:40:38.453654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.688 qpair failed and we were unable to recover it. 00:27:04.688 [2024-12-12 10:40:38.453831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.688 [2024-12-12 10:40:38.453863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.688 qpair failed and we were unable to recover it. 00:27:04.688 [2024-12-12 10:40:38.453979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.688 [2024-12-12 10:40:38.454013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.688 qpair failed and we were unable to recover it. 00:27:04.688 [2024-12-12 10:40:38.454185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.688 [2024-12-12 10:40:38.454217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.688 qpair failed and we were unable to recover it. 
00:27:04.688 [2024-12-12 10:40:38.454428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.688 [2024-12-12 10:40:38.454461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.688 qpair failed and we were unable to recover it. 00:27:04.688 [2024-12-12 10:40:38.454661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.688 [2024-12-12 10:40:38.454696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.688 qpair failed and we were unable to recover it. 00:27:04.688 [2024-12-12 10:40:38.454881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.688 [2024-12-12 10:40:38.454912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.688 qpair failed and we were unable to recover it. 00:27:04.688 [2024-12-12 10:40:38.455089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.688 [2024-12-12 10:40:38.455122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.688 qpair failed and we were unable to recover it. 00:27:04.688 [2024-12-12 10:40:38.455237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.688 [2024-12-12 10:40:38.455270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.688 qpair failed and we were unable to recover it. 00:27:04.688 [2024-12-12 10:40:38.455476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.688 [2024-12-12 10:40:38.455508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.688 qpair failed and we were unable to recover it. 00:27:04.688 [2024-12-12 10:40:38.455702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.688 [2024-12-12 10:40:38.455736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.688 qpair failed and we were unable to recover it. 00:27:04.688 [2024-12-12 10:40:38.455840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.688 [2024-12-12 10:40:38.455872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.688 qpair failed and we were unable to recover it. 00:27:04.688 [2024-12-12 10:40:38.456005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.688 [2024-12-12 10:40:38.456037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.688 qpair failed and we were unable to recover it. 00:27:04.688 [2024-12-12 10:40:38.456141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.688 [2024-12-12 10:40:38.456174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.688 qpair failed and we were unable to recover it. 
00:27:04.688 [2024-12-12 10:40:38.456364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.688 [2024-12-12 10:40:38.456396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.688 qpair failed and we were unable to recover it. 00:27:04.688 [2024-12-12 10:40:38.456582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.688 [2024-12-12 10:40:38.456616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.688 qpair failed and we were unable to recover it. 00:27:04.688 [2024-12-12 10:40:38.456738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.688 [2024-12-12 10:40:38.456770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.688 qpair failed and we were unable to recover it. 00:27:04.688 [2024-12-12 10:40:38.456965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.688 [2024-12-12 10:40:38.456997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.688 qpair failed and we were unable to recover it. 00:27:04.688 [2024-12-12 10:40:38.457132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.688 [2024-12-12 10:40:38.457164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.688 qpair failed and we were unable to recover it. 00:27:04.688 [2024-12-12 10:40:38.457370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.688 [2024-12-12 10:40:38.457403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.688 qpair failed and we were unable to recover it. 00:27:04.688 [2024-12-12 10:40:38.457526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.688 [2024-12-12 10:40:38.457558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.688 qpair failed and we were unable to recover it. 00:27:04.688 [2024-12-12 10:40:38.457807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.688 [2024-12-12 10:40:38.457840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.688 qpair failed and we were unable to recover it. 00:27:04.688 [2024-12-12 10:40:38.458027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.688 [2024-12-12 10:40:38.458061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.688 qpair failed and we were unable to recover it. 00:27:04.688 [2024-12-12 10:40:38.458187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.688 [2024-12-12 10:40:38.458219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.688 qpair failed and we were unable to recover it. 
00:27:04.688 [2024-12-12 10:40:38.458423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.688 [2024-12-12 10:40:38.458455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.688 qpair failed and we were unable to recover it. 00:27:04.688 [2024-12-12 10:40:38.458588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.688 [2024-12-12 10:40:38.458622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.688 qpair failed and we were unable to recover it. 00:27:04.688 [2024-12-12 10:40:38.458740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.688 [2024-12-12 10:40:38.458772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.688 qpair failed and we were unable to recover it. 00:27:04.688 [2024-12-12 10:40:38.458945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.689 [2024-12-12 10:40:38.458978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.689 qpair failed and we were unable to recover it. 00:27:04.689 [2024-12-12 10:40:38.459079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.689 [2024-12-12 10:40:38.459112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.689 qpair failed and we were unable to recover it. 00:27:04.689 [2024-12-12 10:40:38.459320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.689 [2024-12-12 10:40:38.459353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.689 qpair failed and we were unable to recover it. 00:27:04.689 [2024-12-12 10:40:38.459457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.689 [2024-12-12 10:40:38.459490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.689 qpair failed and we were unable to recover it. 00:27:04.689 [2024-12-12 10:40:38.459627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.689 [2024-12-12 10:40:38.459662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.689 qpair failed and we were unable to recover it. 00:27:04.689 [2024-12-12 10:40:38.459844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.689 [2024-12-12 10:40:38.459876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.689 qpair failed and we were unable to recover it. 00:27:04.689 [2024-12-12 10:40:38.460053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.689 [2024-12-12 10:40:38.460092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.689 qpair failed and we were unable to recover it. 
00:27:04.689 [2024-12-12 10:40:38.460286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.689 [2024-12-12 10:40:38.460320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.689 qpair failed and we were unable to recover it. 00:27:04.689 [2024-12-12 10:40:38.460567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.689 [2024-12-12 10:40:38.460610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.689 qpair failed and we were unable to recover it. 00:27:04.689 [2024-12-12 10:40:38.460857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.689 [2024-12-12 10:40:38.460889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.689 qpair failed and we were unable to recover it. 00:27:04.689 [2024-12-12 10:40:38.461155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.689 [2024-12-12 10:40:38.461188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.689 qpair failed and we were unable to recover it. 00:27:04.689 [2024-12-12 10:40:38.461373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.689 [2024-12-12 10:40:38.461405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.689 qpair failed and we were unable to recover it. 00:27:04.689 [2024-12-12 10:40:38.461587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.689 [2024-12-12 10:40:38.461621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.689 qpair failed and we were unable to recover it. 00:27:04.689 [2024-12-12 10:40:38.461810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.689 [2024-12-12 10:40:38.461842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.689 qpair failed and we were unable to recover it. 00:27:04.689 [2024-12-12 10:40:38.462031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.689 [2024-12-12 10:40:38.462064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.689 qpair failed and we were unable to recover it. 00:27:04.689 [2024-12-12 10:40:38.462333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.689 [2024-12-12 10:40:38.462365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.689 qpair failed and we were unable to recover it. 00:27:04.689 [2024-12-12 10:40:38.462557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.689 [2024-12-12 10:40:38.462597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.689 qpair failed and we were unable to recover it. 
00:27:04.689 [2024-12-12 10:40:38.462863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.689 [2024-12-12 10:40:38.462895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.689 qpair failed and we were unable to recover it. 00:27:04.689 [2024-12-12 10:40:38.463016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.689 [2024-12-12 10:40:38.463049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.689 qpair failed and we were unable to recover it. 00:27:04.689 [2024-12-12 10:40:38.463285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.689 [2024-12-12 10:40:38.463318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.689 qpair failed and we were unable to recover it. 00:27:04.689 [2024-12-12 10:40:38.463501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.689 [2024-12-12 10:40:38.463534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.689 qpair failed and we were unable to recover it. 00:27:04.689 [2024-12-12 10:40:38.463763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.689 [2024-12-12 10:40:38.463797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.689 qpair failed and we were unable to recover it. 00:27:04.689 [2024-12-12 10:40:38.464082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.689 [2024-12-12 10:40:38.464115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.689 qpair failed and we were unable to recover it. 00:27:04.689 [2024-12-12 10:40:38.464354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.689 [2024-12-12 10:40:38.464386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.689 qpair failed and we were unable to recover it. 00:27:04.689 [2024-12-12 10:40:38.464561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.689 [2024-12-12 10:40:38.464607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.689 qpair failed and we were unable to recover it. 00:27:04.689 [2024-12-12 10:40:38.464848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.689 [2024-12-12 10:40:38.464880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.689 qpair failed and we were unable to recover it. 00:27:04.689 [2024-12-12 10:40:38.464996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.689 [2024-12-12 10:40:38.465029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.689 qpair failed and we were unable to recover it. 
00:27:04.689 [2024-12-12 10:40:38.465209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.689 [2024-12-12 10:40:38.465241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.689 qpair failed and we were unable to recover it. 00:27:04.689 [2024-12-12 10:40:38.465343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.689 [2024-12-12 10:40:38.465376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.689 qpair failed and we were unable to recover it. 00:27:04.689 [2024-12-12 10:40:38.465642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.689 [2024-12-12 10:40:38.465677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.689 qpair failed and we were unable to recover it. 00:27:04.689 [2024-12-12 10:40:38.465860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.689 [2024-12-12 10:40:38.465892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.690 qpair failed and we were unable to recover it. 00:27:04.690 [2024-12-12 10:40:38.466018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.690 [2024-12-12 10:40:38.466051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.690 qpair failed and we were unable to recover it. 00:27:04.690 [2024-12-12 10:40:38.466287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.690 [2024-12-12 10:40:38.466320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.690 qpair failed and we were unable to recover it. 00:27:04.690 [2024-12-12 10:40:38.466503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.690 [2024-12-12 10:40:38.466536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.690 qpair failed and we were unable to recover it. 00:27:04.690 [2024-12-12 10:40:38.466736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.690 [2024-12-12 10:40:38.466771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.690 qpair failed and we were unable to recover it. 00:27:04.690 [2024-12-12 10:40:38.466943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.690 [2024-12-12 10:40:38.466975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.690 qpair failed and we were unable to recover it. 00:27:04.690 [2024-12-12 10:40:38.467111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.690 [2024-12-12 10:40:38.467143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.690 qpair failed and we were unable to recover it. 
00:27:04.690 [2024-12-12 10:40:38.467318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.690 [2024-12-12 10:40:38.467352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.690 qpair failed and we were unable to recover it. 00:27:04.690 [2024-12-12 10:40:38.467456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.690 [2024-12-12 10:40:38.467488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.690 qpair failed and we were unable to recover it. 00:27:04.690 [2024-12-12 10:40:38.467624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.690 [2024-12-12 10:40:38.467658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.690 qpair failed and we were unable to recover it. 00:27:04.690 [2024-12-12 10:40:38.467900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.690 [2024-12-12 10:40:38.467932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.690 qpair failed and we were unable to recover it. 00:27:04.690 [2024-12-12 10:40:38.468104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.690 [2024-12-12 10:40:38.468136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.690 qpair failed and we were unable to recover it. 00:27:04.690 [2024-12-12 10:40:38.468320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.690 [2024-12-12 10:40:38.468353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.690 qpair failed and we were unable to recover it. 00:27:04.690 [2024-12-12 10:40:38.468528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.690 [2024-12-12 10:40:38.468561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.690 qpair failed and we were unable to recover it. 00:27:04.690 [2024-12-12 10:40:38.468751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.690 [2024-12-12 10:40:38.468784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.690 qpair failed and we were unable to recover it. 00:27:04.690 [2024-12-12 10:40:38.468980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.690 [2024-12-12 10:40:38.469013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.690 qpair failed and we were unable to recover it. 00:27:04.690 [2024-12-12 10:40:38.469132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.690 [2024-12-12 10:40:38.469170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.690 qpair failed and we were unable to recover it. 
00:27:04.690 [2024-12-12 10:40:38.469345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.690 [2024-12-12 10:40:38.469378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:04.690 qpair failed and we were unable to recover it.
00:27:04.690 [2024-12-12 10:40:38.469508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.690 [2024-12-12 10:40:38.469541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:04.690 qpair failed and we were unable to recover it.
00:27:04.690 (the same three-line error record repeats roughly 200 more times between 10:40:38.469 and 10:40:38.513, alternating between tqpair=0x7fb838000b90 and tqpair=0x7fb82c000b90, all for addr=10.0.0.2, port=4420)
00:27:04.696 [2024-12-12 10:40:38.513739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.696 [2024-12-12 10:40:38.513773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:04.696 qpair failed and we were unable to recover it.
00:27:04.696 [2024-12-12 10:40:38.513897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.696 [2024-12-12 10:40:38.513930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.696 qpair failed and we were unable to recover it. 00:27:04.696 [2024-12-12 10:40:38.514042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.696 [2024-12-12 10:40:38.514075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.696 qpair failed and we were unable to recover it. 00:27:04.696 [2024-12-12 10:40:38.514264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.696 [2024-12-12 10:40:38.514297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.696 qpair failed and we were unable to recover it. 00:27:04.696 [2024-12-12 10:40:38.514494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.696 [2024-12-12 10:40:38.514527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.696 qpair failed and we were unable to recover it. 00:27:04.696 [2024-12-12 10:40:38.514676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.696 [2024-12-12 10:40:38.514717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.696 qpair failed and we were unable to recover it. 00:27:04.696 [2024-12-12 10:40:38.514898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.696 [2024-12-12 10:40:38.514931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.696 qpair failed and we were unable to recover it. 00:27:04.696 [2024-12-12 10:40:38.515050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.696 [2024-12-12 10:40:38.515082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.696 qpair failed and we were unable to recover it. 00:27:04.696 [2024-12-12 10:40:38.515334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.696 [2024-12-12 10:40:38.515368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.696 qpair failed and we were unable to recover it. 00:27:04.696 [2024-12-12 10:40:38.515475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.696 [2024-12-12 10:40:38.515508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.696 qpair failed and we were unable to recover it. 00:27:04.696 [2024-12-12 10:40:38.515623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.696 [2024-12-12 10:40:38.515658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.696 qpair failed and we were unable to recover it. 
00:27:04.696 [2024-12-12 10:40:38.515830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.696 [2024-12-12 10:40:38.515863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.696 qpair failed and we were unable to recover it. 00:27:04.696 [2024-12-12 10:40:38.516049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.696 [2024-12-12 10:40:38.516082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.696 qpair failed and we were unable to recover it. 00:27:04.696 [2024-12-12 10:40:38.516282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.696 [2024-12-12 10:40:38.516315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.696 qpair failed and we were unable to recover it. 00:27:04.696 [2024-12-12 10:40:38.516492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.696 [2024-12-12 10:40:38.516524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.696 qpair failed and we were unable to recover it. 00:27:04.696 [2024-12-12 10:40:38.516659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.696 [2024-12-12 10:40:38.516693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.696 qpair failed and we were unable to recover it. 00:27:04.696 [2024-12-12 10:40:38.516868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.696 [2024-12-12 10:40:38.516900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.697 qpair failed and we were unable to recover it. 00:27:04.697 [2024-12-12 10:40:38.517077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.697 [2024-12-12 10:40:38.517115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.697 qpair failed and we were unable to recover it. 00:27:04.697 [2024-12-12 10:40:38.517306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.697 [2024-12-12 10:40:38.517339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.697 qpair failed and we were unable to recover it. 00:27:04.697 [2024-12-12 10:40:38.517628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.697 [2024-12-12 10:40:38.517663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.697 qpair failed and we were unable to recover it. 00:27:04.697 [2024-12-12 10:40:38.517845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.697 [2024-12-12 10:40:38.517878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.697 qpair failed and we were unable to recover it. 
00:27:04.697 [2024-12-12 10:40:38.518057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.697 [2024-12-12 10:40:38.518090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.697 qpair failed and we were unable to recover it. 00:27:04.697 [2024-12-12 10:40:38.518350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.697 [2024-12-12 10:40:38.518383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.697 qpair failed and we were unable to recover it. 00:27:04.697 [2024-12-12 10:40:38.518500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.697 [2024-12-12 10:40:38.518533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.697 qpair failed and we were unable to recover it. 00:27:04.697 [2024-12-12 10:40:38.518722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.697 [2024-12-12 10:40:38.518759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.697 qpair failed and we were unable to recover it. 00:27:04.697 [2024-12-12 10:40:38.518947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.697 [2024-12-12 10:40:38.518980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.697 qpair failed and we were unable to recover it. 00:27:04.697 [2024-12-12 10:40:38.519168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.697 [2024-12-12 10:40:38.519201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.697 qpair failed and we were unable to recover it. 00:27:04.697 [2024-12-12 10:40:38.519376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.697 [2024-12-12 10:40:38.519409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.697 qpair failed and we were unable to recover it. 00:27:04.697 [2024-12-12 10:40:38.519590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.697 [2024-12-12 10:40:38.519625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.697 qpair failed and we were unable to recover it. 00:27:04.697 [2024-12-12 10:40:38.519810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.697 [2024-12-12 10:40:38.519842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.697 qpair failed and we were unable to recover it. 00:27:04.697 [2024-12-12 10:40:38.520093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.697 [2024-12-12 10:40:38.520126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.697 qpair failed and we were unable to recover it. 
00:27:04.697 [2024-12-12 10:40:38.520239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.697 [2024-12-12 10:40:38.520272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.697 qpair failed and we were unable to recover it. 00:27:04.697 [2024-12-12 10:40:38.520445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.697 [2024-12-12 10:40:38.520477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.697 qpair failed and we were unable to recover it. 00:27:04.697 [2024-12-12 10:40:38.520648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.697 [2024-12-12 10:40:38.520682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.697 qpair failed and we were unable to recover it. 00:27:04.697 [2024-12-12 10:40:38.520873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.697 [2024-12-12 10:40:38.520906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.697 qpair failed and we were unable to recover it. 00:27:04.697 [2024-12-12 10:40:38.521013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.697 [2024-12-12 10:40:38.521047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.697 qpair failed and we were unable to recover it. 00:27:04.697 [2024-12-12 10:40:38.521287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.697 [2024-12-12 10:40:38.521320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.697 qpair failed and we were unable to recover it. 00:27:04.697 [2024-12-12 10:40:38.521528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.697 [2024-12-12 10:40:38.521561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.697 qpair failed and we were unable to recover it. 00:27:04.697 [2024-12-12 10:40:38.521678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.697 [2024-12-12 10:40:38.521712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.697 qpair failed and we were unable to recover it. 00:27:04.697 [2024-12-12 10:40:38.521906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.697 [2024-12-12 10:40:38.521939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.697 qpair failed and we were unable to recover it. 00:27:04.697 [2024-12-12 10:40:38.522147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.697 [2024-12-12 10:40:38.522180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.697 qpair failed and we were unable to recover it. 
00:27:04.697 [2024-12-12 10:40:38.522290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.697 [2024-12-12 10:40:38.522323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.697 qpair failed and we were unable to recover it. 00:27:04.697 [2024-12-12 10:40:38.522497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.697 [2024-12-12 10:40:38.522530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.697 qpair failed and we were unable to recover it. 00:27:04.697 [2024-12-12 10:40:38.522737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.697 [2024-12-12 10:40:38.522770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.697 qpair failed and we were unable to recover it. 00:27:04.697 [2024-12-12 10:40:38.522949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.697 [2024-12-12 10:40:38.522983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.697 qpair failed and we were unable to recover it. 00:27:04.697 [2024-12-12 10:40:38.523194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.697 [2024-12-12 10:40:38.523227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.697 qpair failed and we were unable to recover it. 00:27:04.697 [2024-12-12 10:40:38.523402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.697 [2024-12-12 10:40:38.523435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.697 qpair failed and we were unable to recover it. 00:27:04.697 [2024-12-12 10:40:38.523625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.697 [2024-12-12 10:40:38.523660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.697 qpair failed and we were unable to recover it. 00:27:04.697 [2024-12-12 10:40:38.523834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.697 [2024-12-12 10:40:38.523866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.697 qpair failed and we were unable to recover it. 00:27:04.697 [2024-12-12 10:40:38.524047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.697 [2024-12-12 10:40:38.524080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.697 qpair failed and we were unable to recover it. 00:27:04.697 [2024-12-12 10:40:38.524319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.697 [2024-12-12 10:40:38.524352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.697 qpair failed and we were unable to recover it. 
00:27:04.697 [2024-12-12 10:40:38.524463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.698 [2024-12-12 10:40:38.524496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.698 qpair failed and we were unable to recover it. 00:27:04.698 [2024-12-12 10:40:38.524613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.698 [2024-12-12 10:40:38.524648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.698 qpair failed and we were unable to recover it. 00:27:04.698 [2024-12-12 10:40:38.524822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.698 [2024-12-12 10:40:38.524855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.698 qpair failed and we were unable to recover it. 00:27:04.698 [2024-12-12 10:40:38.524971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.698 [2024-12-12 10:40:38.525003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.698 qpair failed and we were unable to recover it. 00:27:04.698 [2024-12-12 10:40:38.525121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.698 [2024-12-12 10:40:38.525154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.698 qpair failed and we were unable to recover it. 00:27:04.698 [2024-12-12 10:40:38.525328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.698 [2024-12-12 10:40:38.525361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.698 qpair failed and we were unable to recover it. 00:27:04.698 [2024-12-12 10:40:38.525599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.698 [2024-12-12 10:40:38.525639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.698 qpair failed and we were unable to recover it. 00:27:04.698 [2024-12-12 10:40:38.525768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.698 [2024-12-12 10:40:38.525801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.698 qpair failed and we were unable to recover it. 00:27:04.698 [2024-12-12 10:40:38.525980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.698 [2024-12-12 10:40:38.526013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.698 qpair failed and we were unable to recover it. 00:27:04.698 [2024-12-12 10:40:38.526184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.698 [2024-12-12 10:40:38.526217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.698 qpair failed and we were unable to recover it. 
00:27:04.698 [2024-12-12 10:40:38.526392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.698 [2024-12-12 10:40:38.526424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.698 qpair failed and we were unable to recover it. 00:27:04.698 [2024-12-12 10:40:38.526600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.698 [2024-12-12 10:40:38.526634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.698 qpair failed and we were unable to recover it. 00:27:04.698 [2024-12-12 10:40:38.526823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.698 [2024-12-12 10:40:38.526857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.698 qpair failed and we were unable to recover it. 00:27:04.698 [2024-12-12 10:40:38.527046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.698 [2024-12-12 10:40:38.527080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.698 qpair failed and we were unable to recover it. 00:27:04.698 [2024-12-12 10:40:38.527250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.698 [2024-12-12 10:40:38.527283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.698 qpair failed and we were unable to recover it. 00:27:04.698 [2024-12-12 10:40:38.527495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.698 [2024-12-12 10:40:38.527534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.698 qpair failed and we were unable to recover it. 00:27:04.698 [2024-12-12 10:40:38.527725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.698 [2024-12-12 10:40:38.527758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.698 qpair failed and we were unable to recover it. 00:27:04.698 [2024-12-12 10:40:38.527946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.698 [2024-12-12 10:40:38.527978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.698 qpair failed and we were unable to recover it. 00:27:04.698 [2024-12-12 10:40:38.528174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.698 [2024-12-12 10:40:38.528206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.698 qpair failed and we were unable to recover it. 00:27:04.698 [2024-12-12 10:40:38.528338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.698 [2024-12-12 10:40:38.528371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.698 qpair failed and we were unable to recover it. 
00:27:04.698 [2024-12-12 10:40:38.528483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.698 [2024-12-12 10:40:38.528516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.698 qpair failed and we were unable to recover it. 00:27:04.698 [2024-12-12 10:40:38.528722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.698 [2024-12-12 10:40:38.528756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.698 qpair failed and we were unable to recover it. 00:27:04.698 [2024-12-12 10:40:38.528885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.698 [2024-12-12 10:40:38.528917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.698 qpair failed and we were unable to recover it. 00:27:04.698 [2024-12-12 10:40:38.529028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.698 [2024-12-12 10:40:38.529060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.698 qpair failed and we were unable to recover it. 00:27:04.698 [2024-12-12 10:40:38.529179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.698 [2024-12-12 10:40:38.529211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.698 qpair failed and we were unable to recover it. 00:27:04.698 [2024-12-12 10:40:38.529400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.698 [2024-12-12 10:40:38.529432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.698 qpair failed and we were unable to recover it. 00:27:04.698 [2024-12-12 10:40:38.529628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.698 [2024-12-12 10:40:38.529663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.698 qpair failed and we were unable to recover it. 00:27:04.698 [2024-12-12 10:40:38.529839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.698 [2024-12-12 10:40:38.529871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.698 qpair failed and we were unable to recover it. 00:27:04.698 [2024-12-12 10:40:38.529980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.698 [2024-12-12 10:40:38.530012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.698 qpair failed and we were unable to recover it. 00:27:04.698 [2024-12-12 10:40:38.530198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.698 [2024-12-12 10:40:38.530231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.698 qpair failed and we were unable to recover it. 
00:27:04.698 [2024-12-12 10:40:38.530402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.698 [2024-12-12 10:40:38.530433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.698 qpair failed and we were unable to recover it. 00:27:04.698 [2024-12-12 10:40:38.530680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.698 [2024-12-12 10:40:38.530714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.698 qpair failed and we were unable to recover it. 00:27:04.698 [2024-12-12 10:40:38.530889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.698 [2024-12-12 10:40:38.530922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.698 qpair failed and we were unable to recover it. 00:27:04.698 [2024-12-12 10:40:38.531066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.698 [2024-12-12 10:40:38.531098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.698 qpair failed and we were unable to recover it. 00:27:04.698 [2024-12-12 10:40:38.531215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.698 [2024-12-12 10:40:38.531248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.698 qpair failed and we were unable to recover it. 00:27:04.698 [2024-12-12 10:40:38.531433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.699 [2024-12-12 10:40:38.531467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.699 qpair failed and we were unable to recover it. 00:27:04.699 [2024-12-12 10:40:38.531653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.699 [2024-12-12 10:40:38.531687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.699 qpair failed and we were unable to recover it. 00:27:04.699 [2024-12-12 10:40:38.531873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.699 [2024-12-12 10:40:38.531906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.699 qpair failed and we were unable to recover it. 00:27:04.699 [2024-12-12 10:40:38.532077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.699 [2024-12-12 10:40:38.532110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.699 qpair failed and we were unable to recover it. 00:27:04.699 [2024-12-12 10:40:38.532289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.699 [2024-12-12 10:40:38.532322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.699 qpair failed and we were unable to recover it. 
00:27:04.699 [2024-12-12 10:40:38.532515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.699 [2024-12-12 10:40:38.532547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.699 qpair failed and we were unable to recover it. 00:27:04.699 [2024-12-12 10:40:38.532747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.699 [2024-12-12 10:40:38.532781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.699 qpair failed and we were unable to recover it. 00:27:04.699 [2024-12-12 10:40:38.532894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.699 [2024-12-12 10:40:38.532927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.699 qpair failed and we were unable to recover it. 00:27:04.699 [2024-12-12 10:40:38.533060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.699 [2024-12-12 10:40:38.533092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.699 qpair failed and we were unable to recover it. 00:27:04.699 [2024-12-12 10:40:38.533265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.699 [2024-12-12 10:40:38.533298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.699 qpair failed and we were unable to recover it. 00:27:04.699 [2024-12-12 10:40:38.533471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.699 [2024-12-12 10:40:38.533503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.699 qpair failed and we were unable to recover it. 00:27:04.699 [2024-12-12 10:40:38.533748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.699 [2024-12-12 10:40:38.533788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.699 qpair failed and we were unable to recover it. 00:27:04.699 [2024-12-12 10:40:38.533911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.699 [2024-12-12 10:40:38.533944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.699 qpair failed and we were unable to recover it. 00:27:04.699 [2024-12-12 10:40:38.534192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.699 [2024-12-12 10:40:38.534224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.699 qpair failed and we were unable to recover it. 00:27:04.699 [2024-12-12 10:40:38.534343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.699 [2024-12-12 10:40:38.534375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.699 qpair failed and we were unable to recover it. 
00:27:04.699 [2024-12-12 10:40:38.534501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.699 [2024-12-12 10:40:38.534533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.699 qpair failed and we were unable to recover it. 00:27:04.699 [2024-12-12 10:40:38.534746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.699 [2024-12-12 10:40:38.534779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.699 qpair failed and we were unable to recover it. 00:27:04.699 [2024-12-12 10:40:38.534889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.699 [2024-12-12 10:40:38.534921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.699 qpair failed and we were unable to recover it. 00:27:04.699 [2024-12-12 10:40:38.535053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.699 [2024-12-12 10:40:38.535085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.699 qpair failed and we were unable to recover it. 00:27:04.699 [2024-12-12 10:40:38.535342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.699 [2024-12-12 10:40:38.535374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.699 qpair failed and we were unable to recover it. 00:27:04.699 [2024-12-12 10:40:38.535483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.699 [2024-12-12 10:40:38.535516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.699 qpair failed and we were unable to recover it. 00:27:04.699 [2024-12-12 10:40:38.535729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.699 [2024-12-12 10:40:38.535762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.699 qpair failed and we were unable to recover it. 00:27:04.699 [2024-12-12 10:40:38.535936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.699 [2024-12-12 10:40:38.535968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.699 qpair failed and we were unable to recover it. 00:27:04.699 [2024-12-12 10:40:38.536075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.699 [2024-12-12 10:40:38.536107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.699 qpair failed and we were unable to recover it. 00:27:04.699 [2024-12-12 10:40:38.536295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.699 [2024-12-12 10:40:38.536327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.699 qpair failed and we were unable to recover it. 
00:27:04.699 [2024-12-12 10:40:38.536522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.699 [2024-12-12 10:40:38.536555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.699 qpair failed and we were unable to recover it. 00:27:04.699 [2024-12-12 10:40:38.536804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.699 [2024-12-12 10:40:38.536836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.699 qpair failed and we were unable to recover it. 00:27:04.699 [2024-12-12 10:40:38.537079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.699 [2024-12-12 10:40:38.537112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.699 qpair failed and we were unable to recover it. 00:27:04.699 [2024-12-12 10:40:38.537349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.699 [2024-12-12 10:40:38.537381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.699 qpair failed and we were unable to recover it. 00:27:04.699 [2024-12-12 10:40:38.537512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.699 [2024-12-12 10:40:38.537545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.699 qpair failed and we were unable to recover it. 00:27:04.699 [2024-12-12 10:40:38.537758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.699 [2024-12-12 10:40:38.537791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.700 qpair failed and we were unable to recover it. 00:27:04.700 [2024-12-12 10:40:38.537990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.700 [2024-12-12 10:40:38.538022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.700 qpair failed and we were unable to recover it. 00:27:04.700 [2024-12-12 10:40:38.538210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.700 [2024-12-12 10:40:38.538242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.700 qpair failed and we were unable to recover it. 00:27:04.700 [2024-12-12 10:40:38.538348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.700 [2024-12-12 10:40:38.538380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.700 qpair failed and we were unable to recover it. 00:27:04.700 [2024-12-12 10:40:38.538563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.700 [2024-12-12 10:40:38.538607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.700 qpair failed and we were unable to recover it. 
00:27:04.700 [2024-12-12 10:40:38.538817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.700 [2024-12-12 10:40:38.538849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.700 qpair failed and we were unable to recover it. 00:27:04.700 [2024-12-12 10:40:38.539029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.700 [2024-12-12 10:40:38.539063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.700 qpair failed and we were unable to recover it. 00:27:04.700 [2024-12-12 10:40:38.539182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.700 [2024-12-12 10:40:38.539214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.700 qpair failed and we were unable to recover it. 00:27:04.700 [2024-12-12 10:40:38.539324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.700 [2024-12-12 10:40:38.539357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.700 qpair failed and we were unable to recover it. 00:27:04.700 [2024-12-12 10:40:38.539539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.700 [2024-12-12 10:40:38.539583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.700 qpair failed and we were unable to recover it. 00:27:04.700 [2024-12-12 10:40:38.539771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.700 [2024-12-12 10:40:38.539805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.700 qpair failed and we were unable to recover it. 00:27:04.700 [2024-12-12 10:40:38.539979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.700 [2024-12-12 10:40:38.540012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.700 qpair failed and we were unable to recover it. 00:27:04.700 [2024-12-12 10:40:38.540223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.700 [2024-12-12 10:40:38.540255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.700 qpair failed and we were unable to recover it. 00:27:04.700 [2024-12-12 10:40:38.540442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.700 [2024-12-12 10:40:38.540474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.700 qpair failed and we were unable to recover it. 00:27:04.700 [2024-12-12 10:40:38.540617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.700 [2024-12-12 10:40:38.540653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.700 qpair failed and we were unable to recover it. 
00:27:04.700 [2024-12-12 10:40:38.540840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.700 [2024-12-12 10:40:38.540873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:04.700 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triplet repeats 39 more times for tqpair=0x7fb838000b90 (10:40:38.541001 through 10:40:38.549130), every attempt failing with errno = 111 against 10.0.0.2:4420 ...]
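The errno = 111 in these triplets is ECONNREFUSED: each connect() to 10.0.0.2:4420 gets a TCP RST back because nothing is accepting on the NVMe/TCP port at that moment, so the host-side qpair cannot be established. Below is a minimal standalone C sketch that reproduces the same failure, assuming a Linux host and a reachable address with no listener on the port; it is not SPDK code, only the bare connect() underlying posix_sock_create:

  /* Reproduce "connect() failed, errno = 111" against a port with no listener. */
  #include <stdio.h>
  #include <string.h>
  #include <errno.h>
  #include <unistd.h>
  #include <arpa/inet.h>
  #include <sys/socket.h>

  int main(void)
  {
      int fd = socket(AF_INET, SOCK_STREAM, 0);
      struct sockaddr_in sa = {0};
      sa.sin_family = AF_INET;
      sa.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
      inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr); /* target address from the log */

      if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
          /* With a reachable host but no listener this prints:
           * connect() failed, errno = 111 (Connection refused) */
          printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

      close(fd);
      return 0;
  }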
00:27:04.701 [2024-12-12 10:40:38.549305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.701 [2024-12-12 10:40:38.549378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420
00:27:04.701 qpair failed and we were unable to recover it.
00:27:04.701 [2024-12-12 10:40:38.549636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.701 [2024-12-12 10:40:38.549706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:04.701 qpair failed and we were unable to recover it.
00:27:04.701 [2024-12-12 10:40:38.549964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.701 [2024-12-12 10:40:38.550035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:04.701 qpair failed and we were unable to recover it.
[... the triplet repeats 44 more times for tqpair=0x1c1b1a0 (10:40:38.550190 through 10:40:38.559724), still errno = 111 against 10.0.0.2:4420 ...]
00:27:04.702 [2024-12-12 10:40:38.560054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.702 [2024-12-12 10:40:38.560126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:04.702 qpair failed and we were unable to recover it.
[... the triplet repeats for tqpair=0x7fb830000b90 (well over a hundred further attempts, every connect() still failing with errno = 111 against 10.0.0.2:4420) until 10:40:38.586228, when the failures shift back to tqpair=0x1c1b1a0 ...]
00:27:04.706 [2024-12-12 10:40:38.586228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.706 [2024-12-12 10:40:38.586301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:04.706 qpair failed and we were unable to recover it.
00:27:04.706 [2024-12-12 10:40:38.586453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.706 [2024-12-12 10:40:38.586490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:04.706 qpair failed and we were unable to recover it.
00:27:04.706 [2024-12-12 10:40:38.586628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.706 [2024-12-12 10:40:38.586664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:04.706 qpair failed and we were unable to recover it.
00:27:04.706 [2024-12-12 10:40:38.586853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.706 [2024-12-12 10:40:38.586886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.706 qpair failed and we were unable to recover it. 00:27:04.706 [2024-12-12 10:40:38.587063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.706 [2024-12-12 10:40:38.587097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.706 qpair failed and we were unable to recover it. 00:27:04.706 [2024-12-12 10:40:38.587265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.706 [2024-12-12 10:40:38.587297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.706 qpair failed and we were unable to recover it. 00:27:04.706 [2024-12-12 10:40:38.587429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.706 [2024-12-12 10:40:38.587462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.706 qpair failed and we were unable to recover it. 00:27:04.706 [2024-12-12 10:40:38.587579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.706 [2024-12-12 10:40:38.587614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.706 qpair failed and we were unable to recover it. 00:27:04.706 [2024-12-12 10:40:38.587867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.706 [2024-12-12 10:40:38.587900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.706 qpair failed and we were unable to recover it. 00:27:04.706 [2024-12-12 10:40:38.588139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.706 [2024-12-12 10:40:38.588172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.706 qpair failed and we were unable to recover it. 00:27:04.706 [2024-12-12 10:40:38.588355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.706 [2024-12-12 10:40:38.588388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.706 qpair failed and we were unable to recover it. 00:27:04.706 [2024-12-12 10:40:38.588496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.706 [2024-12-12 10:40:38.588528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.706 qpair failed and we were unable to recover it. 00:27:04.706 [2024-12-12 10:40:38.588726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.706 [2024-12-12 10:40:38.588761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.707 qpair failed and we were unable to recover it. 
00:27:04.707 [2024-12-12 10:40:38.588879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.707 [2024-12-12 10:40:38.588913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.707 qpair failed and we were unable to recover it. 00:27:04.707 [2024-12-12 10:40:38.589045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.707 [2024-12-12 10:40:38.589078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.707 qpair failed and we were unable to recover it. 00:27:04.707 [2024-12-12 10:40:38.589265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.707 [2024-12-12 10:40:38.589298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.707 qpair failed and we were unable to recover it. 00:27:04.707 [2024-12-12 10:40:38.589478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.707 [2024-12-12 10:40:38.589511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.707 qpair failed and we were unable to recover it. 00:27:04.707 [2024-12-12 10:40:38.589630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.707 [2024-12-12 10:40:38.589663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.707 qpair failed and we were unable to recover it. 00:27:04.707 [2024-12-12 10:40:38.589776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.707 [2024-12-12 10:40:38.589809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.707 qpair failed and we were unable to recover it. 00:27:04.707 [2024-12-12 10:40:38.589981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.707 [2024-12-12 10:40:38.590014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.707 qpair failed and we were unable to recover it. 00:27:04.707 [2024-12-12 10:40:38.590189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.707 [2024-12-12 10:40:38.590222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.707 qpair failed and we were unable to recover it. 00:27:04.707 [2024-12-12 10:40:38.590470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.707 [2024-12-12 10:40:38.590502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.707 qpair failed and we were unable to recover it. 00:27:04.707 [2024-12-12 10:40:38.590623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.707 [2024-12-12 10:40:38.590658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.707 qpair failed and we were unable to recover it. 
00:27:04.707 [2024-12-12 10:40:38.590784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.707 [2024-12-12 10:40:38.590816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.707 qpair failed and we were unable to recover it. 00:27:04.707 [2024-12-12 10:40:38.591010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.707 [2024-12-12 10:40:38.591044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.707 qpair failed and we were unable to recover it. 00:27:04.707 [2024-12-12 10:40:38.591151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.707 [2024-12-12 10:40:38.591181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.707 qpair failed and we were unable to recover it. 00:27:04.707 [2024-12-12 10:40:38.591303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.707 [2024-12-12 10:40:38.591336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.707 qpair failed and we were unable to recover it. 00:27:04.707 [2024-12-12 10:40:38.591510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.707 [2024-12-12 10:40:38.591548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.707 qpair failed and we were unable to recover it. 00:27:04.707 [2024-12-12 10:40:38.591747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.707 [2024-12-12 10:40:38.591781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.707 qpair failed and we were unable to recover it. 00:27:04.707 [2024-12-12 10:40:38.591901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.707 [2024-12-12 10:40:38.591934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.707 qpair failed and we were unable to recover it. 00:27:04.707 [2024-12-12 10:40:38.592109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.707 [2024-12-12 10:40:38.592142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.707 qpair failed and we were unable to recover it. 00:27:04.707 [2024-12-12 10:40:38.592265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.707 [2024-12-12 10:40:38.592297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.707 qpair failed and we were unable to recover it. 00:27:04.707 [2024-12-12 10:40:38.592497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.707 [2024-12-12 10:40:38.592531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.707 qpair failed and we were unable to recover it. 
00:27:04.707 [2024-12-12 10:40:38.592729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.707 [2024-12-12 10:40:38.592764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.707 qpair failed and we were unable to recover it. 00:27:04.707 [2024-12-12 10:40:38.592948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.707 [2024-12-12 10:40:38.592981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.707 qpair failed and we were unable to recover it. 00:27:04.707 [2024-12-12 10:40:38.593169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.707 [2024-12-12 10:40:38.593202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.707 qpair failed and we were unable to recover it. 00:27:04.707 [2024-12-12 10:40:38.593409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.707 [2024-12-12 10:40:38.593442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.707 qpair failed and we were unable to recover it. 00:27:04.707 [2024-12-12 10:40:38.593619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.707 [2024-12-12 10:40:38.593653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.707 qpair failed and we were unable to recover it. 00:27:04.707 [2024-12-12 10:40:38.593785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.707 [2024-12-12 10:40:38.593818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.707 qpair failed and we were unable to recover it. 00:27:04.707 [2024-12-12 10:40:38.594022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.707 [2024-12-12 10:40:38.594055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.707 qpair failed and we were unable to recover it. 00:27:04.707 [2024-12-12 10:40:38.594188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.707 [2024-12-12 10:40:38.594221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.707 qpair failed and we were unable to recover it. 00:27:04.707 [2024-12-12 10:40:38.594348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.707 [2024-12-12 10:40:38.594382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.707 qpair failed and we were unable to recover it. 00:27:04.708 [2024-12-12 10:40:38.594620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.708 [2024-12-12 10:40:38.594656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.708 qpair failed and we were unable to recover it. 
00:27:04.708 [2024-12-12 10:40:38.594789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.708 [2024-12-12 10:40:38.594821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.708 qpair failed and we were unable to recover it. 00:27:04.708 [2024-12-12 10:40:38.595113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.708 [2024-12-12 10:40:38.595146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.708 qpair failed and we were unable to recover it. 00:27:04.708 [2024-12-12 10:40:38.595351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.708 [2024-12-12 10:40:38.595383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.708 qpair failed and we were unable to recover it. 00:27:04.708 [2024-12-12 10:40:38.595511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.708 [2024-12-12 10:40:38.595544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.708 qpair failed and we were unable to recover it. 00:27:04.708 [2024-12-12 10:40:38.595767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.708 [2024-12-12 10:40:38.595801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.708 qpair failed and we were unable to recover it. 00:27:04.708 [2024-12-12 10:40:38.596015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.708 [2024-12-12 10:40:38.596048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.708 qpair failed and we were unable to recover it. 00:27:04.708 [2024-12-12 10:40:38.596225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.708 [2024-12-12 10:40:38.596258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.708 qpair failed and we were unable to recover it. 00:27:04.708 [2024-12-12 10:40:38.596375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.708 [2024-12-12 10:40:38.596407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.708 qpair failed and we were unable to recover it. 00:27:04.708 [2024-12-12 10:40:38.596591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.708 [2024-12-12 10:40:38.596626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.708 qpair failed and we were unable to recover it. 00:27:04.708 [2024-12-12 10:40:38.596753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.708 [2024-12-12 10:40:38.596785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.708 qpair failed and we were unable to recover it. 
00:27:04.708 [2024-12-12 10:40:38.596964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.708 [2024-12-12 10:40:38.596997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.708 qpair failed and we were unable to recover it. 00:27:04.708 [2024-12-12 10:40:38.597124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.708 [2024-12-12 10:40:38.597162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.708 qpair failed and we were unable to recover it. 00:27:04.708 [2024-12-12 10:40:38.597359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.708 [2024-12-12 10:40:38.597393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.708 qpair failed and we were unable to recover it. 00:27:04.708 [2024-12-12 10:40:38.597497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.708 [2024-12-12 10:40:38.597529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.708 qpair failed and we were unable to recover it. 00:27:04.708 [2024-12-12 10:40:38.597645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.708 [2024-12-12 10:40:38.597679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.708 qpair failed and we were unable to recover it. 00:27:04.708 [2024-12-12 10:40:38.597799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.708 [2024-12-12 10:40:38.597831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.708 qpair failed and we were unable to recover it. 00:27:04.708 [2024-12-12 10:40:38.597958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.708 [2024-12-12 10:40:38.597990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.708 qpair failed and we were unable to recover it. 00:27:04.708 [2024-12-12 10:40:38.598234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.708 [2024-12-12 10:40:38.598266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.708 qpair failed and we were unable to recover it. 00:27:04.708 [2024-12-12 10:40:38.598436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.708 [2024-12-12 10:40:38.598470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.708 qpair failed and we were unable to recover it. 00:27:04.708 [2024-12-12 10:40:38.598712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.708 [2024-12-12 10:40:38.598747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.708 qpair failed and we were unable to recover it. 
00:27:04.708 [2024-12-12 10:40:38.598878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.708 [2024-12-12 10:40:38.598911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.708 qpair failed and we were unable to recover it. 00:27:04.708 [2024-12-12 10:40:38.599035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.708 [2024-12-12 10:40:38.599067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.708 qpair failed and we were unable to recover it. 00:27:04.708 [2024-12-12 10:40:38.599267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.709 [2024-12-12 10:40:38.599299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.709 qpair failed and we were unable to recover it. 00:27:04.709 [2024-12-12 10:40:38.599418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.709 [2024-12-12 10:40:38.599450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.709 qpair failed and we were unable to recover it. 00:27:04.709 [2024-12-12 10:40:38.599640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.709 [2024-12-12 10:40:38.599674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.709 qpair failed and we were unable to recover it. 00:27:04.709 [2024-12-12 10:40:38.599801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.709 [2024-12-12 10:40:38.599835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.709 qpair failed and we were unable to recover it. 00:27:04.709 [2024-12-12 10:40:38.600005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.709 [2024-12-12 10:40:38.600037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.709 qpair failed and we were unable to recover it. 00:27:04.709 [2024-12-12 10:40:38.600238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.709 [2024-12-12 10:40:38.600271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.709 qpair failed and we were unable to recover it. 00:27:04.709 [2024-12-12 10:40:38.600376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.709 [2024-12-12 10:40:38.600410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.709 qpair failed and we were unable to recover it. 00:27:04.709 [2024-12-12 10:40:38.600587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.709 [2024-12-12 10:40:38.600620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.709 qpair failed and we were unable to recover it. 
00:27:04.709 [2024-12-12 10:40:38.600902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.709 [2024-12-12 10:40:38.600935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.709 qpair failed and we were unable to recover it. 00:27:04.709 [2024-12-12 10:40:38.601048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.709 [2024-12-12 10:40:38.601081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.709 qpair failed and we were unable to recover it. 00:27:04.709 [2024-12-12 10:40:38.601270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.709 [2024-12-12 10:40:38.601303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.709 qpair failed and we were unable to recover it. 00:27:04.709 [2024-12-12 10:40:38.601479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.709 [2024-12-12 10:40:38.601513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.709 qpair failed and we were unable to recover it. 00:27:04.709 [2024-12-12 10:40:38.601692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.709 [2024-12-12 10:40:38.601725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.709 qpair failed and we were unable to recover it. 00:27:04.709 [2024-12-12 10:40:38.601843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.709 [2024-12-12 10:40:38.601877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.709 qpair failed and we were unable to recover it. 00:27:04.709 [2024-12-12 10:40:38.601994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.709 [2024-12-12 10:40:38.602026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.709 qpair failed and we were unable to recover it. 00:27:04.709 [2024-12-12 10:40:38.602158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.709 [2024-12-12 10:40:38.602191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.709 qpair failed and we were unable to recover it. 00:27:04.709 [2024-12-12 10:40:38.602358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.709 [2024-12-12 10:40:38.602392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.709 qpair failed and we were unable to recover it. 00:27:04.709 [2024-12-12 10:40:38.602520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.709 [2024-12-12 10:40:38.602554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.709 qpair failed and we were unable to recover it. 
00:27:04.709 [2024-12-12 10:40:38.602745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.709 [2024-12-12 10:40:38.602778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.709 qpair failed and we were unable to recover it. 00:27:04.709 [2024-12-12 10:40:38.602897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.709 [2024-12-12 10:40:38.602930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.709 qpair failed and we were unable to recover it. 00:27:04.709 [2024-12-12 10:40:38.603058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.709 [2024-12-12 10:40:38.603091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.709 qpair failed and we were unable to recover it. 00:27:04.709 [2024-12-12 10:40:38.603219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.709 [2024-12-12 10:40:38.603251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.709 qpair failed and we were unable to recover it. 00:27:04.709 [2024-12-12 10:40:38.603374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.709 [2024-12-12 10:40:38.603406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.709 qpair failed and we were unable to recover it. 00:27:04.709 [2024-12-12 10:40:38.603590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.709 [2024-12-12 10:40:38.603624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.709 qpair failed and we were unable to recover it. 00:27:04.710 [2024-12-12 10:40:38.603815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.710 [2024-12-12 10:40:38.603851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.710 qpair failed and we were unable to recover it. 00:27:04.710 [2024-12-12 10:40:38.604047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.710 [2024-12-12 10:40:38.604081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.710 qpair failed and we were unable to recover it. 00:27:04.710 [2024-12-12 10:40:38.604268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.710 [2024-12-12 10:40:38.604301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.710 qpair failed and we were unable to recover it. 00:27:04.710 [2024-12-12 10:40:38.604565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.710 [2024-12-12 10:40:38.604622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.710 qpair failed and we were unable to recover it. 
00:27:04.710 [2024-12-12 10:40:38.604813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.710 [2024-12-12 10:40:38.604847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.710 qpair failed and we were unable to recover it. 00:27:04.710 [2024-12-12 10:40:38.605030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.710 [2024-12-12 10:40:38.605063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.710 qpair failed and we were unable to recover it. 00:27:04.710 [2024-12-12 10:40:38.605205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.710 [2024-12-12 10:40:38.605239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.710 qpair failed and we were unable to recover it. 00:27:04.710 [2024-12-12 10:40:38.605378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.710 [2024-12-12 10:40:38.605410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.710 qpair failed and we were unable to recover it. 00:27:04.710 [2024-12-12 10:40:38.605594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.710 [2024-12-12 10:40:38.605629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.710 qpair failed and we were unable to recover it. 00:27:04.710 [2024-12-12 10:40:38.605821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.710 [2024-12-12 10:40:38.605854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.710 qpair failed and we were unable to recover it. 00:27:04.710 [2024-12-12 10:40:38.605975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.710 [2024-12-12 10:40:38.606008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.710 qpair failed and we were unable to recover it. 00:27:04.710 [2024-12-12 10:40:38.606267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.710 [2024-12-12 10:40:38.606301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.710 qpair failed and we were unable to recover it. 00:27:04.710 [2024-12-12 10:40:38.606483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.710 [2024-12-12 10:40:38.606516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.710 qpair failed and we were unable to recover it. 00:27:04.710 [2024-12-12 10:40:38.606631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.710 [2024-12-12 10:40:38.606666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.710 qpair failed and we were unable to recover it. 
00:27:04.710 [2024-12-12 10:40:38.606848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.710 [2024-12-12 10:40:38.606880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.710 qpair failed and we were unable to recover it. 00:27:04.710 [2024-12-12 10:40:38.607055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.710 [2024-12-12 10:40:38.607088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.710 qpair failed and we were unable to recover it. 00:27:04.710 [2024-12-12 10:40:38.607192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.710 [2024-12-12 10:40:38.607225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.710 qpair failed and we were unable to recover it. 00:27:04.710 [2024-12-12 10:40:38.607395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.710 [2024-12-12 10:40:38.607428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.710 qpair failed and we were unable to recover it. 00:27:04.710 [2024-12-12 10:40:38.607605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.711 [2024-12-12 10:40:38.607640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.711 qpair failed and we were unable to recover it. 00:27:04.711 [2024-12-12 10:40:38.607926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.711 [2024-12-12 10:40:38.607958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.711 qpair failed and we were unable to recover it. 00:27:04.711 [2024-12-12 10:40:38.608203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.711 [2024-12-12 10:40:38.608236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.711 qpair failed and we were unable to recover it. 00:27:04.711 [2024-12-12 10:40:38.608492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.711 [2024-12-12 10:40:38.608526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.711 qpair failed and we were unable to recover it. 00:27:04.711 [2024-12-12 10:40:38.608827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.711 [2024-12-12 10:40:38.608861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.711 qpair failed and we were unable to recover it. 00:27:04.711 [2024-12-12 10:40:38.608979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.711 [2024-12-12 10:40:38.609012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.711 qpair failed and we were unable to recover it. 
00:27:04.711 [2024-12-12 10:40:38.609215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.711 [2024-12-12 10:40:38.609248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.711 qpair failed and we were unable to recover it. 00:27:04.711 [2024-12-12 10:40:38.609421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.711 [2024-12-12 10:40:38.609455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.711 qpair failed and we were unable to recover it. 00:27:04.711 [2024-12-12 10:40:38.609592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.711 [2024-12-12 10:40:38.609627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.711 qpair failed and we were unable to recover it. 00:27:04.711 [2024-12-12 10:40:38.609817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.711 [2024-12-12 10:40:38.609851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.711 qpair failed and we were unable to recover it. 00:27:04.711 [2024-12-12 10:40:38.609982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.711 [2024-12-12 10:40:38.610015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.711 qpair failed and we were unable to recover it. 00:27:04.711 [2024-12-12 10:40:38.610198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.711 [2024-12-12 10:40:38.610232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.711 qpair failed and we were unable to recover it. 00:27:04.711 [2024-12-12 10:40:38.610408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.711 [2024-12-12 10:40:38.610443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.711 qpair failed and we were unable to recover it. 00:27:04.711 [2024-12-12 10:40:38.610635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.711 [2024-12-12 10:40:38.610671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.711 qpair failed and we were unable to recover it. 00:27:04.711 [2024-12-12 10:40:38.610795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.711 [2024-12-12 10:40:38.610829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.711 qpair failed and we were unable to recover it. 00:27:04.711 [2024-12-12 10:40:38.610998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.711 [2024-12-12 10:40:38.611036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.711 qpair failed and we were unable to recover it. 
00:27:04.711 [2024-12-12 10:40:38.611218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.711 [2024-12-12 10:40:38.611251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.711 qpair failed and we were unable to recover it. 00:27:04.711 [2024-12-12 10:40:38.611526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.711 [2024-12-12 10:40:38.611559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.711 qpair failed and we were unable to recover it. 00:27:04.711 [2024-12-12 10:40:38.611756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.711 [2024-12-12 10:40:38.611789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.711 qpair failed and we were unable to recover it. 00:27:04.711 [2024-12-12 10:40:38.612053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.711 [2024-12-12 10:40:38.612085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.711 qpair failed and we were unable to recover it. 00:27:04.711 [2024-12-12 10:40:38.612209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.711 [2024-12-12 10:40:38.612243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.711 qpair failed and we were unable to recover it. 00:27:04.711 [2024-12-12 10:40:38.612450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.711 [2024-12-12 10:40:38.612482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.711 qpair failed and we were unable to recover it. 00:27:04.711 [2024-12-12 10:40:38.612711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.711 [2024-12-12 10:40:38.612746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.711 qpair failed and we were unable to recover it. 00:27:04.711 [2024-12-12 10:40:38.612883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.712 [2024-12-12 10:40:38.612915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.712 qpair failed and we were unable to recover it. 00:27:04.712 [2024-12-12 10:40:38.613036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.712 [2024-12-12 10:40:38.613069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.712 qpair failed and we were unable to recover it. 00:27:04.712 [2024-12-12 10:40:38.613251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.712 [2024-12-12 10:40:38.613284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.712 qpair failed and we were unable to recover it. 
00:27:04.712 [2024-12-12 10:40:38.613408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.712 [2024-12-12 10:40:38.613441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.712 qpair failed and we were unable to recover it. 00:27:04.712 [2024-12-12 10:40:38.613726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.712 [2024-12-12 10:40:38.613760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.712 qpair failed and we were unable to recover it. 00:27:04.712 [2024-12-12 10:40:38.613938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.712 [2024-12-12 10:40:38.613971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.712 qpair failed and we were unable to recover it. 00:27:04.712 [2024-12-12 10:40:38.614148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.712 [2024-12-12 10:40:38.614181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.712 qpair failed and we were unable to recover it. 00:27:04.712 [2024-12-12 10:40:38.614318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.712 [2024-12-12 10:40:38.614351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.712 qpair failed and we were unable to recover it. 00:27:04.712 [2024-12-12 10:40:38.614469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.712 [2024-12-12 10:40:38.614502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.712 qpair failed and we were unable to recover it. 00:27:04.712 [2024-12-12 10:40:38.614622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.712 [2024-12-12 10:40:38.614656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.712 qpair failed and we were unable to recover it. 00:27:04.712 [2024-12-12 10:40:38.614852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.712 [2024-12-12 10:40:38.614885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.712 qpair failed and we were unable to recover it. 00:27:04.712 [2024-12-12 10:40:38.615061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.712 [2024-12-12 10:40:38.615094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.712 qpair failed and we were unable to recover it. 00:27:04.712 [2024-12-12 10:40:38.615202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.712 [2024-12-12 10:40:38.615235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.712 qpair failed and we were unable to recover it. 
00:27:04.719 [2024-12-12 10:40:38.658368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.719 [2024-12-12 10:40:38.658401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.719 qpair failed and we were unable to recover it. 00:27:04.719 [2024-12-12 10:40:38.658602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.719 [2024-12-12 10:40:38.658637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.719 qpair failed and we were unable to recover it. 00:27:04.719 [2024-12-12 10:40:38.658884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.719 [2024-12-12 10:40:38.658917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.719 qpair failed and we were unable to recover it. 00:27:04.719 [2024-12-12 10:40:38.659026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.719 [2024-12-12 10:40:38.659058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.719 qpair failed and we were unable to recover it. 00:27:04.719 [2024-12-12 10:40:38.659265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.719 [2024-12-12 10:40:38.659297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.719 qpair failed and we were unable to recover it. 00:27:04.719 [2024-12-12 10:40:38.659486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.719 [2024-12-12 10:40:38.659519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.719 qpair failed and we were unable to recover it. 00:27:04.719 [2024-12-12 10:40:38.659701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.719 [2024-12-12 10:40:38.659735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.719 qpair failed and we were unable to recover it. 00:27:04.719 [2024-12-12 10:40:38.659853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.719 [2024-12-12 10:40:38.659886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.719 qpair failed and we were unable to recover it. 00:27:04.719 [2024-12-12 10:40:38.660057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.719 [2024-12-12 10:40:38.660089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.719 qpair failed and we were unable to recover it. 00:27:04.719 [2024-12-12 10:40:38.660294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.719 [2024-12-12 10:40:38.660326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.719 qpair failed and we were unable to recover it. 
00:27:04.719 [2024-12-12 10:40:38.660521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.719 [2024-12-12 10:40:38.660554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.719 qpair failed and we were unable to recover it. 00:27:04.719 [2024-12-12 10:40:38.660849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.719 [2024-12-12 10:40:38.660883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.719 qpair failed and we were unable to recover it. 00:27:04.719 [2024-12-12 10:40:38.661063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.719 [2024-12-12 10:40:38.661095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.719 qpair failed and we were unable to recover it. 00:27:04.719 [2024-12-12 10:40:38.661274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.719 [2024-12-12 10:40:38.661306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.719 qpair failed and we were unable to recover it. 00:27:04.719 [2024-12-12 10:40:38.661485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.719 [2024-12-12 10:40:38.661518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.719 qpair failed and we were unable to recover it. 00:27:04.719 [2024-12-12 10:40:38.661700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.719 [2024-12-12 10:40:38.661734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.719 qpair failed and we were unable to recover it. 00:27:04.719 [2024-12-12 10:40:38.661907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.719 [2024-12-12 10:40:38.661940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.719 qpair failed and we were unable to recover it. 00:27:04.719 [2024-12-12 10:40:38.662115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.719 [2024-12-12 10:40:38.662148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.719 qpair failed and we were unable to recover it. 00:27:04.719 [2024-12-12 10:40:38.662390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.719 [2024-12-12 10:40:38.662422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.719 qpair failed and we were unable to recover it. 00:27:04.719 [2024-12-12 10:40:38.662556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.719 [2024-12-12 10:40:38.662598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.719 qpair failed and we were unable to recover it. 
00:27:04.719 [2024-12-12 10:40:38.662854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.719 [2024-12-12 10:40:38.662888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.719 qpair failed and we were unable to recover it. 00:27:04.719 [2024-12-12 10:40:38.663063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.719 [2024-12-12 10:40:38.663095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.719 qpair failed and we were unable to recover it. 00:27:04.719 [2024-12-12 10:40:38.663225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.719 [2024-12-12 10:40:38.663257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.719 qpair failed and we were unable to recover it. 00:27:04.719 [2024-12-12 10:40:38.663449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.719 [2024-12-12 10:40:38.663487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.719 qpair failed and we were unable to recover it. 00:27:04.719 [2024-12-12 10:40:38.663601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.719 [2024-12-12 10:40:38.663636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.719 qpair failed and we were unable to recover it. 00:27:04.719 [2024-12-12 10:40:38.663840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.719 [2024-12-12 10:40:38.663872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.719 qpair failed and we were unable to recover it. 00:27:04.719 [2024-12-12 10:40:38.664050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.719 [2024-12-12 10:40:38.664082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.719 qpair failed and we were unable to recover it. 00:27:04.719 [2024-12-12 10:40:38.664210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.719 [2024-12-12 10:40:38.664243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.719 qpair failed and we were unable to recover it. 00:27:04.719 [2024-12-12 10:40:38.664350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.719 [2024-12-12 10:40:38.664379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.719 qpair failed and we were unable to recover it. 00:27:04.720 [2024-12-12 10:40:38.664617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.720 [2024-12-12 10:40:38.664651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.720 qpair failed and we were unable to recover it. 
00:27:04.720 [2024-12-12 10:40:38.664847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.720 [2024-12-12 10:40:38.664879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.720 qpair failed and we were unable to recover it. 00:27:04.720 [2024-12-12 10:40:38.665055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.720 [2024-12-12 10:40:38.665087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.720 qpair failed and we were unable to recover it. 00:27:04.720 [2024-12-12 10:40:38.665348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.720 [2024-12-12 10:40:38.665381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.720 qpair failed and we were unable to recover it. 00:27:04.720 [2024-12-12 10:40:38.665553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.720 [2024-12-12 10:40:38.665595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.720 qpair failed and we were unable to recover it. 00:27:04.720 [2024-12-12 10:40:38.665769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.720 [2024-12-12 10:40:38.665801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.720 qpair failed and we were unable to recover it. 00:27:04.720 [2024-12-12 10:40:38.665924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.720 [2024-12-12 10:40:38.665956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.720 qpair failed and we were unable to recover it. 00:27:04.720 [2024-12-12 10:40:38.666138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.720 [2024-12-12 10:40:38.666170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.720 qpair failed and we were unable to recover it. 00:27:04.720 [2024-12-12 10:40:38.666352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.720 [2024-12-12 10:40:38.666385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.720 qpair failed and we were unable to recover it. 00:27:04.720 [2024-12-12 10:40:38.666516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.720 [2024-12-12 10:40:38.666548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.720 qpair failed and we were unable to recover it. 00:27:04.720 [2024-12-12 10:40:38.666767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.720 [2024-12-12 10:40:38.666801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.720 qpair failed and we were unable to recover it. 
00:27:04.720 [2024-12-12 10:40:38.667069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.720 [2024-12-12 10:40:38.667102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.720 qpair failed and we were unable to recover it. 00:27:04.720 [2024-12-12 10:40:38.667231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.720 [2024-12-12 10:40:38.667263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.720 qpair failed and we were unable to recover it. 00:27:04.720 [2024-12-12 10:40:38.667432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.720 [2024-12-12 10:40:38.667465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.720 qpair failed and we were unable to recover it. 00:27:04.720 [2024-12-12 10:40:38.667724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.720 [2024-12-12 10:40:38.667759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.720 qpair failed and we were unable to recover it. 00:27:04.720 [2024-12-12 10:40:38.667960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.720 [2024-12-12 10:40:38.667992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.720 qpair failed and we were unable to recover it. 00:27:04.720 [2024-12-12 10:40:38.668188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.720 [2024-12-12 10:40:38.668221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.720 qpair failed and we were unable to recover it. 00:27:04.720 [2024-12-12 10:40:38.668469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.720 [2024-12-12 10:40:38.668502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.720 qpair failed and we were unable to recover it. 00:27:04.720 [2024-12-12 10:40:38.668698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.720 [2024-12-12 10:40:38.668733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.720 qpair failed and we were unable to recover it. 00:27:04.720 [2024-12-12 10:40:38.668846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.720 [2024-12-12 10:40:38.668878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.720 qpair failed and we were unable to recover it. 00:27:04.720 [2024-12-12 10:40:38.669063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.720 [2024-12-12 10:40:38.669096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.720 qpair failed and we were unable to recover it. 
00:27:04.720 [2024-12-12 10:40:38.669310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.720 [2024-12-12 10:40:38.669348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.720 qpair failed and we were unable to recover it. 00:27:04.720 [2024-12-12 10:40:38.669524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.720 [2024-12-12 10:40:38.669557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.720 qpair failed and we were unable to recover it. 00:27:04.720 [2024-12-12 10:40:38.669803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.720 [2024-12-12 10:40:38.669837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.720 qpair failed and we were unable to recover it. 00:27:04.720 [2024-12-12 10:40:38.669945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.720 [2024-12-12 10:40:38.669977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.720 qpair failed and we were unable to recover it. 00:27:04.720 [2024-12-12 10:40:38.670119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.720 [2024-12-12 10:40:38.670151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.720 qpair failed and we were unable to recover it. 00:27:04.720 [2024-12-12 10:40:38.670254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.720 [2024-12-12 10:40:38.670286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.720 qpair failed and we were unable to recover it. 00:27:04.720 [2024-12-12 10:40:38.670454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.720 [2024-12-12 10:40:38.670487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.720 qpair failed and we were unable to recover it. 00:27:04.720 [2024-12-12 10:40:38.670661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.720 [2024-12-12 10:40:38.670695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.720 qpair failed and we were unable to recover it. 00:27:04.720 [2024-12-12 10:40:38.670896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.720 [2024-12-12 10:40:38.670928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.720 qpair failed and we were unable to recover it. 00:27:04.720 [2024-12-12 10:40:38.671102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.720 [2024-12-12 10:40:38.671135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.720 qpair failed and we were unable to recover it. 
00:27:04.720 [2024-12-12 10:40:38.671321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.720 [2024-12-12 10:40:38.671353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.720 qpair failed and we were unable to recover it. 00:27:04.720 [2024-12-12 10:40:38.671612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.720 [2024-12-12 10:40:38.671646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.720 qpair failed and we were unable to recover it. 00:27:04.720 [2024-12-12 10:40:38.671840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.720 [2024-12-12 10:40:38.671872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.720 qpair failed and we were unable to recover it. 00:27:04.720 [2024-12-12 10:40:38.672134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.720 [2024-12-12 10:40:38.672167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:04.720 qpair failed and we were unable to recover it. 00:27:04.720 [2024-12-12 10:40:38.672347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.721 [2024-12-12 10:40:38.672418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.721 qpair failed and we were unable to recover it. 00:27:04.721 [2024-12-12 10:40:38.672593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.721 [2024-12-12 10:40:38.672632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.721 qpair failed and we were unable to recover it. 00:27:04.721 [2024-12-12 10:40:38.672875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.721 [2024-12-12 10:40:38.672910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.721 qpair failed and we were unable to recover it. 00:27:04.721 [2024-12-12 10:40:38.673200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.721 [2024-12-12 10:40:38.673233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.721 qpair failed and we were unable to recover it. 00:27:04.721 [2024-12-12 10:40:38.673420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.721 [2024-12-12 10:40:38.673454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.721 qpair failed and we were unable to recover it. 00:27:04.721 [2024-12-12 10:40:38.673693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.721 [2024-12-12 10:40:38.673733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.721 qpair failed and we were unable to recover it. 
00:27:04.721 [2024-12-12 10:40:38.673840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.721 [2024-12-12 10:40:38.673873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.721 qpair failed and we were unable to recover it. 00:27:04.721 [2024-12-12 10:40:38.674105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.721 [2024-12-12 10:40:38.674138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.721 qpair failed and we were unable to recover it. 00:27:04.721 [2024-12-12 10:40:38.674327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.721 [2024-12-12 10:40:38.674359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.721 qpair failed and we were unable to recover it. 00:27:04.721 [2024-12-12 10:40:38.674465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.721 [2024-12-12 10:40:38.674500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.721 qpair failed and we were unable to recover it. 00:27:04.721 [2024-12-12 10:40:38.674677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.721 [2024-12-12 10:40:38.674711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.721 qpair failed and we were unable to recover it. 00:27:04.721 [2024-12-12 10:40:38.674880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.721 [2024-12-12 10:40:38.674913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.721 qpair failed and we were unable to recover it. 00:27:04.721 [2024-12-12 10:40:38.675103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.721 [2024-12-12 10:40:38.675136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.721 qpair failed and we were unable to recover it. 00:27:04.721 [2024-12-12 10:40:38.675317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.721 [2024-12-12 10:40:38.675360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.721 qpair failed and we were unable to recover it. 00:27:04.721 [2024-12-12 10:40:38.675554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.721 [2024-12-12 10:40:38.675597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.721 qpair failed and we were unable to recover it. 00:27:04.721 [2024-12-12 10:40:38.675812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.721 [2024-12-12 10:40:38.675844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.721 qpair failed and we were unable to recover it. 
00:27:04.721 [2024-12-12 10:40:38.676028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.721 [2024-12-12 10:40:38.676061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.721 qpair failed and we were unable to recover it. 00:27:04.721 [2024-12-12 10:40:38.676160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.721 [2024-12-12 10:40:38.676192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.721 qpair failed and we were unable to recover it. 00:27:04.721 [2024-12-12 10:40:38.676422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.721 [2024-12-12 10:40:38.676455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.721 qpair failed and we were unable to recover it. 00:27:04.721 [2024-12-12 10:40:38.676631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.721 [2024-12-12 10:40:38.676666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.721 qpair failed and we were unable to recover it. 00:27:04.721 [2024-12-12 10:40:38.676803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.721 [2024-12-12 10:40:38.676834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.721 qpair failed and we were unable to recover it. 00:27:04.721 [2024-12-12 10:40:38.677033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.721 [2024-12-12 10:40:38.677066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.721 qpair failed and we were unable to recover it. 00:27:04.721 [2024-12-12 10:40:38.677243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.721 [2024-12-12 10:40:38.677275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.721 qpair failed and we were unable to recover it. 00:27:04.721 [2024-12-12 10:40:38.677485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.721 [2024-12-12 10:40:38.677518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.721 qpair failed and we were unable to recover it. 00:27:04.721 [2024-12-12 10:40:38.677736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.721 [2024-12-12 10:40:38.677770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.721 qpair failed and we were unable to recover it. 00:27:04.721 [2024-12-12 10:40:38.677978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.721 [2024-12-12 10:40:38.678010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.721 qpair failed and we were unable to recover it. 
00:27:04.721 [2024-12-12 10:40:38.678223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.721 [2024-12-12 10:40:38.678255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.721 qpair failed and we were unable to recover it. 00:27:04.721 [2024-12-12 10:40:38.678431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.721 [2024-12-12 10:40:38.678465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.721 qpair failed and we were unable to recover it. 00:27:04.721 [2024-12-12 10:40:38.678724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.721 [2024-12-12 10:40:38.678759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.721 qpair failed and we were unable to recover it. 00:27:04.721 [2024-12-12 10:40:38.678949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.721 [2024-12-12 10:40:38.678982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.721 qpair failed and we were unable to recover it. 00:27:04.721 [2024-12-12 10:40:38.679170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.721 [2024-12-12 10:40:38.679202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.721 qpair failed and we were unable to recover it. 00:27:04.721 [2024-12-12 10:40:38.679462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.721 [2024-12-12 10:40:38.679494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.721 qpair failed and we were unable to recover it. 00:27:04.721 [2024-12-12 10:40:38.679618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.721 [2024-12-12 10:40:38.679652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.721 qpair failed and we were unable to recover it. 00:27:04.721 [2024-12-12 10:40:38.679916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.721 [2024-12-12 10:40:38.679950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.721 qpair failed and we were unable to recover it. 00:27:04.721 [2024-12-12 10:40:38.680198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.721 [2024-12-12 10:40:38.680230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.722 qpair failed and we were unable to recover it. 00:27:04.722 [2024-12-12 10:40:38.680339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.722 [2024-12-12 10:40:38.680371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.722 qpair failed and we were unable to recover it. 
00:27:04.722 [2024-12-12 10:40:38.680482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.722 [2024-12-12 10:40:38.680515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.722 qpair failed and we were unable to recover it. 00:27:04.722 [2024-12-12 10:40:38.680788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.722 [2024-12-12 10:40:38.680822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.722 qpair failed and we were unable to recover it. 00:27:04.722 [2024-12-12 10:40:38.681013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.722 [2024-12-12 10:40:38.681046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.722 qpair failed and we were unable to recover it. 00:27:04.722 [2024-12-12 10:40:38.681281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.722 [2024-12-12 10:40:38.681315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:04.722 qpair failed and we were unable to recover it. 00:27:05.001 [2024-12-12 10:40:38.681557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.001 [2024-12-12 10:40:38.681602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.001 qpair failed and we were unable to recover it. 00:27:05.001 [2024-12-12 10:40:38.681772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.001 [2024-12-12 10:40:38.681804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.001 qpair failed and we were unable to recover it. 00:27:05.001 [2024-12-12 10:40:38.681996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.001 [2024-12-12 10:40:38.682028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.001 qpair failed and we were unable to recover it. 00:27:05.001 [2024-12-12 10:40:38.682269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.001 [2024-12-12 10:40:38.682302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.001 qpair failed and we were unable to recover it. 00:27:05.001 [2024-12-12 10:40:38.682435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.001 [2024-12-12 10:40:38.682467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.001 qpair failed and we were unable to recover it. 00:27:05.001 [2024-12-12 10:40:38.682714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.001 [2024-12-12 10:40:38.682749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.001 qpair failed and we were unable to recover it. 
00:27:05.001 [2024-12-12 10:40:38.682998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.001 [2024-12-12 10:40:38.683031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.001 qpair failed and we were unable to recover it. 00:27:05.001 [2024-12-12 10:40:38.683163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.001 [2024-12-12 10:40:38.683195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.001 qpair failed and we were unable to recover it. 00:27:05.001 [2024-12-12 10:40:38.683370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.001 [2024-12-12 10:40:38.683402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.001 qpair failed and we were unable to recover it. 00:27:05.001 [2024-12-12 10:40:38.683668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.001 [2024-12-12 10:40:38.683702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.001 qpair failed and we were unable to recover it. 00:27:05.001 [2024-12-12 10:40:38.683884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.002 [2024-12-12 10:40:38.683915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.002 qpair failed and we were unable to recover it. 00:27:05.002 [2024-12-12 10:40:38.684090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.002 [2024-12-12 10:40:38.684123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.002 qpair failed and we were unable to recover it. 00:27:05.002 [2024-12-12 10:40:38.684307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.002 [2024-12-12 10:40:38.684340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.002 qpair failed and we were unable to recover it. 00:27:05.002 [2024-12-12 10:40:38.684481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.002 [2024-12-12 10:40:38.684519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.002 qpair failed and we were unable to recover it. 00:27:05.002 [2024-12-12 10:40:38.684711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.002 [2024-12-12 10:40:38.684745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.002 qpair failed and we were unable to recover it. 00:27:05.002 [2024-12-12 10:40:38.684934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.002 [2024-12-12 10:40:38.684967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.002 qpair failed and we were unable to recover it. 
00:27:05.002 [2024-12-12 10:40:38.685205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.002 [2024-12-12 10:40:38.685238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.002 qpair failed and we were unable to recover it. 00:27:05.002 [2024-12-12 10:40:38.685351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.002 [2024-12-12 10:40:38.685383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.002 qpair failed and we were unable to recover it. 00:27:05.002 [2024-12-12 10:40:38.685556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.002 [2024-12-12 10:40:38.685598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.002 qpair failed and we were unable to recover it. 00:27:05.002 [2024-12-12 10:40:38.685870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.002 [2024-12-12 10:40:38.685903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.002 qpair failed and we were unable to recover it. 00:27:05.002 [2024-12-12 10:40:38.686173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.002 [2024-12-12 10:40:38.686205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.002 qpair failed and we were unable to recover it. 00:27:05.002 [2024-12-12 10:40:38.686380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.002 [2024-12-12 10:40:38.686412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.002 qpair failed and we were unable to recover it. 00:27:05.002 [2024-12-12 10:40:38.686517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.002 [2024-12-12 10:40:38.686549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.002 qpair failed and we were unable to recover it. 00:27:05.002 [2024-12-12 10:40:38.686693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.002 [2024-12-12 10:40:38.686726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.002 qpair failed and we were unable to recover it. 00:27:05.002 [2024-12-12 10:40:38.686934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.002 [2024-12-12 10:40:38.686967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.002 qpair failed and we were unable to recover it. 00:27:05.002 [2024-12-12 10:40:38.687087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.002 [2024-12-12 10:40:38.687119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.002 qpair failed and we were unable to recover it. 
00:27:05.002 [2024-12-12 10:40:38.687357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.002 [2024-12-12 10:40:38.687390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.002 qpair failed and we were unable to recover it. 00:27:05.002 [2024-12-12 10:40:38.687514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.002 [2024-12-12 10:40:38.687547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.002 qpair failed and we were unable to recover it. 00:27:05.002 [2024-12-12 10:40:38.687734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.002 [2024-12-12 10:40:38.687768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.002 qpair failed and we were unable to recover it. 00:27:05.002 [2024-12-12 10:40:38.687968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.002 [2024-12-12 10:40:38.688000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.002 qpair failed and we were unable to recover it. 00:27:05.002 [2024-12-12 10:40:38.688186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.002 [2024-12-12 10:40:38.688218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.002 qpair failed and we were unable to recover it. 00:27:05.002 [2024-12-12 10:40:38.688342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.002 [2024-12-12 10:40:38.688376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.002 qpair failed and we were unable to recover it. 00:27:05.002 [2024-12-12 10:40:38.688642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.002 [2024-12-12 10:40:38.688676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.002 qpair failed and we were unable to recover it. 00:27:05.002 [2024-12-12 10:40:38.688863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.002 [2024-12-12 10:40:38.688896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.002 qpair failed and we were unable to recover it. 00:27:05.002 [2024-12-12 10:40:38.689011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.002 [2024-12-12 10:40:38.689042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.002 qpair failed and we were unable to recover it. 00:27:05.002 [2024-12-12 10:40:38.689158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.002 [2024-12-12 10:40:38.689191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.002 qpair failed and we were unable to recover it. 
00:27:05.002 [2024-12-12 10:40:38.689458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.002 [2024-12-12 10:40:38.689491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.002 qpair failed and we were unable to recover it. 00:27:05.002 [2024-12-12 10:40:38.689754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.002 [2024-12-12 10:40:38.689788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.002 qpair failed and we were unable to recover it. 00:27:05.002 [2024-12-12 10:40:38.689924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.002 [2024-12-12 10:40:38.689957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.002 qpair failed and we were unable to recover it. 00:27:05.002 [2024-12-12 10:40:38.690216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.002 [2024-12-12 10:40:38.690249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.002 qpair failed and we were unable to recover it. 00:27:05.002 [2024-12-12 10:40:38.690514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.002 [2024-12-12 10:40:38.690546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.002 qpair failed and we were unable to recover it. 00:27:05.002 [2024-12-12 10:40:38.690815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.002 [2024-12-12 10:40:38.690848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.002 qpair failed and we were unable to recover it. 00:27:05.002 [2024-12-12 10:40:38.691083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.002 [2024-12-12 10:40:38.691115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.002 qpair failed and we were unable to recover it. 00:27:05.002 [2024-12-12 10:40:38.691349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.002 [2024-12-12 10:40:38.691382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.002 qpair failed and we were unable to recover it. 00:27:05.002 [2024-12-12 10:40:38.691505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.002 [2024-12-12 10:40:38.691538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.002 qpair failed and we were unable to recover it. 00:27:05.002 [2024-12-12 10:40:38.691720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.002 [2024-12-12 10:40:38.691754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.002 qpair failed and we were unable to recover it. 
00:27:05.002 [2024-12-12 10:40:38.692000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.002 [2024-12-12 10:40:38.692033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.002 qpair failed and we were unable to recover it. 00:27:05.002 [2024-12-12 10:40:38.692149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.002 [2024-12-12 10:40:38.692182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.002 qpair failed and we were unable to recover it. 00:27:05.003 [2024-12-12 10:40:38.692380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.003 [2024-12-12 10:40:38.692413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.003 qpair failed and we were unable to recover it. 00:27:05.003 [2024-12-12 10:40:38.692592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.003 [2024-12-12 10:40:38.692625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.003 qpair failed and we were unable to recover it. 00:27:05.003 [2024-12-12 10:40:38.692757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.003 [2024-12-12 10:40:38.692790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.003 qpair failed and we were unable to recover it. 00:27:05.003 [2024-12-12 10:40:38.692976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.003 [2024-12-12 10:40:38.693008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.003 qpair failed and we were unable to recover it. 00:27:05.003 [2024-12-12 10:40:38.693129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.003 [2024-12-12 10:40:38.693161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.003 qpair failed and we were unable to recover it. 00:27:05.003 [2024-12-12 10:40:38.693334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.003 [2024-12-12 10:40:38.693372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.003 qpair failed and we were unable to recover it. 00:27:05.003 [2024-12-12 10:40:38.693498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.003 [2024-12-12 10:40:38.693531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.003 qpair failed and we were unable to recover it. 00:27:05.003 [2024-12-12 10:40:38.693805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.003 [2024-12-12 10:40:38.693839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.003 qpair failed and we were unable to recover it. 
00:27:05.003 [2024-12-12 10:40:38.694106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.003 [2024-12-12 10:40:38.694139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.003 qpair failed and we were unable to recover it. 00:27:05.003 [2024-12-12 10:40:38.694373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.003 [2024-12-12 10:40:38.694405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.003 qpair failed and we were unable to recover it. 00:27:05.003 [2024-12-12 10:40:38.694610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.003 [2024-12-12 10:40:38.694645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.003 qpair failed and we were unable to recover it. 00:27:05.003 [2024-12-12 10:40:38.694851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.003 [2024-12-12 10:40:38.694885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.003 qpair failed and we were unable to recover it. 00:27:05.003 [2024-12-12 10:40:38.694991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.003 [2024-12-12 10:40:38.695024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.003 qpair failed and we were unable to recover it. 00:27:05.003 [2024-12-12 10:40:38.695259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.003 [2024-12-12 10:40:38.695292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.003 qpair failed and we were unable to recover it. 00:27:05.003 [2024-12-12 10:40:38.695497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.003 [2024-12-12 10:40:38.695529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.003 qpair failed and we were unable to recover it. 00:27:05.003 [2024-12-12 10:40:38.695720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.003 [2024-12-12 10:40:38.695754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.003 qpair failed and we were unable to recover it. 00:27:05.003 [2024-12-12 10:40:38.695929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.003 [2024-12-12 10:40:38.695962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.003 qpair failed and we were unable to recover it. 00:27:05.003 [2024-12-12 10:40:38.696207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.003 [2024-12-12 10:40:38.696240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.003 qpair failed and we were unable to recover it. 
00:27:05.003 [2024-12-12 10:40:38.696440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.003 [2024-12-12 10:40:38.696472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.003 qpair failed and we were unable to recover it. 00:27:05.003 [2024-12-12 10:40:38.696648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.003 [2024-12-12 10:40:38.696682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.003 qpair failed and we were unable to recover it. 00:27:05.003 [2024-12-12 10:40:38.696801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.003 [2024-12-12 10:40:38.696833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.003 qpair failed and we were unable to recover it. 00:27:05.003 [2024-12-12 10:40:38.697012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.003 [2024-12-12 10:40:38.697045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.003 qpair failed and we were unable to recover it. 00:27:05.003 [2024-12-12 10:40:38.697308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.003 [2024-12-12 10:40:38.697342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.003 qpair failed and we were unable to recover it. 00:27:05.003 [2024-12-12 10:40:38.697524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.003 [2024-12-12 10:40:38.697557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.003 qpair failed and we were unable to recover it. 00:27:05.003 [2024-12-12 10:40:38.697759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.003 [2024-12-12 10:40:38.697793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.003 qpair failed and we were unable to recover it. 00:27:05.003 [2024-12-12 10:40:38.697965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.003 [2024-12-12 10:40:38.697998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.003 qpair failed and we were unable to recover it. 00:27:05.003 [2024-12-12 10:40:38.698252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.003 [2024-12-12 10:40:38.698284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.003 qpair failed and we were unable to recover it. 00:27:05.003 [2024-12-12 10:40:38.698490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.003 [2024-12-12 10:40:38.698523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.003 qpair failed and we were unable to recover it. 
00:27:05.003 [2024-12-12 10:40:38.698725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.003 [2024-12-12 10:40:38.698758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.003 qpair failed and we were unable to recover it. 00:27:05.003 [2024-12-12 10:40:38.698864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.003 [2024-12-12 10:40:38.698897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.003 qpair failed and we were unable to recover it. 00:27:05.003 [2024-12-12 10:40:38.699082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.003 [2024-12-12 10:40:38.699115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.003 qpair failed and we were unable to recover it. 00:27:05.003 [2024-12-12 10:40:38.699303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.003 [2024-12-12 10:40:38.699337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.003 qpair failed and we were unable to recover it. 00:27:05.003 [2024-12-12 10:40:38.699557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.003 [2024-12-12 10:40:38.699600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.003 qpair failed and we were unable to recover it. 00:27:05.003 [2024-12-12 10:40:38.699781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.003 [2024-12-12 10:40:38.699814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.003 qpair failed and we were unable to recover it. 00:27:05.003 [2024-12-12 10:40:38.700000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.003 [2024-12-12 10:40:38.700034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.003 qpair failed and we were unable to recover it. 00:27:05.003 [2024-12-12 10:40:38.700293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.003 [2024-12-12 10:40:38.700325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.003 qpair failed and we were unable to recover it. 00:27:05.003 [2024-12-12 10:40:38.700451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.003 [2024-12-12 10:40:38.700483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 00:27:05.004 [2024-12-12 10:40:38.700616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.004 [2024-12-12 10:40:38.700650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 
00:27:05.004 [2024-12-12 10:40:38.700780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.004 [2024-12-12 10:40:38.700812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 00:27:05.004 [2024-12-12 10:40:38.701028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.004 [2024-12-12 10:40:38.701060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 00:27:05.004 [2024-12-12 10:40:38.701248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.004 [2024-12-12 10:40:38.701281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 00:27:05.004 [2024-12-12 10:40:38.701534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.004 [2024-12-12 10:40:38.701567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 00:27:05.004 [2024-12-12 10:40:38.701781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.004 [2024-12-12 10:40:38.701813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 00:27:05.004 [2024-12-12 10:40:38.702099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.004 [2024-12-12 10:40:38.702131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 00:27:05.004 [2024-12-12 10:40:38.702366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.004 [2024-12-12 10:40:38.702398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 00:27:05.004 [2024-12-12 10:40:38.702636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.004 [2024-12-12 10:40:38.702675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 00:27:05.004 [2024-12-12 10:40:38.702890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.004 [2024-12-12 10:40:38.702922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 00:27:05.004 [2024-12-12 10:40:38.703182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.004 [2024-12-12 10:40:38.703215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 
00:27:05.004 [2024-12-12 10:40:38.703341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.004 [2024-12-12 10:40:38.703373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 00:27:05.004 [2024-12-12 10:40:38.703491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.004 [2024-12-12 10:40:38.703524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 00:27:05.004 [2024-12-12 10:40:38.703746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.004 [2024-12-12 10:40:38.703779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 00:27:05.004 [2024-12-12 10:40:38.704032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.004 [2024-12-12 10:40:38.704065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 00:27:05.004 [2024-12-12 10:40:38.704305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.004 [2024-12-12 10:40:38.704338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 00:27:05.004 [2024-12-12 10:40:38.704585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.004 [2024-12-12 10:40:38.704619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 00:27:05.004 [2024-12-12 10:40:38.704856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.004 [2024-12-12 10:40:38.704889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 00:27:05.004 [2024-12-12 10:40:38.705081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.004 [2024-12-12 10:40:38.705114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 00:27:05.004 [2024-12-12 10:40:38.705290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.004 [2024-12-12 10:40:38.705322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 00:27:05.004 [2024-12-12 10:40:38.705504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.004 [2024-12-12 10:40:38.705536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 
00:27:05.004 [2024-12-12 10:40:38.705753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.004 [2024-12-12 10:40:38.705788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 00:27:05.004 [2024-12-12 10:40:38.705994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.004 [2024-12-12 10:40:38.706027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 00:27:05.004 [2024-12-12 10:40:38.706198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.004 [2024-12-12 10:40:38.706231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 00:27:05.004 [2024-12-12 10:40:38.706489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.004 [2024-12-12 10:40:38.706522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 00:27:05.004 [2024-12-12 10:40:38.706740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.004 [2024-12-12 10:40:38.706775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 00:27:05.004 [2024-12-12 10:40:38.706962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.004 [2024-12-12 10:40:38.706995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 00:27:05.004 [2024-12-12 10:40:38.707135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.004 [2024-12-12 10:40:38.707167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 00:27:05.004 [2024-12-12 10:40:38.707296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.004 [2024-12-12 10:40:38.707329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 00:27:05.004 [2024-12-12 10:40:38.707445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.004 [2024-12-12 10:40:38.707477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 00:27:05.004 [2024-12-12 10:40:38.707650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.004 [2024-12-12 10:40:38.707684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 
00:27:05.004 [2024-12-12 10:40:38.707800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.004 [2024-12-12 10:40:38.707833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 00:27:05.004 [2024-12-12 10:40:38.708035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.004 [2024-12-12 10:40:38.708068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 00:27:05.004 [2024-12-12 10:40:38.708336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.004 [2024-12-12 10:40:38.708379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 00:27:05.004 [2024-12-12 10:40:38.708510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.004 [2024-12-12 10:40:38.708542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 00:27:05.004 [2024-12-12 10:40:38.708716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.004 [2024-12-12 10:40:38.708750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 00:27:05.004 [2024-12-12 10:40:38.708963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-12-12 10:40:38.708996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-12-12 10:40:38.709124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-12-12 10:40:38.709157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-12-12 10:40:38.709347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-12-12 10:40:38.709380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-12-12 10:40:38.709643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-12-12 10:40:38.709677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-12-12 10:40:38.709782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-12-12 10:40:38.709812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 
00:27:05.005 [2024-12-12 10:40:38.709932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-12-12 10:40:38.709964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-12-12 10:40:38.710069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-12-12 10:40:38.710100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-12-12 10:40:38.710276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-12-12 10:40:38.710308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-12-12 10:40:38.710444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-12-12 10:40:38.710476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-12-12 10:40:38.710595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-12-12 10:40:38.710629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-12-12 10:40:38.710876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-12-12 10:40:38.710908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-12-12 10:40:38.711144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-12-12 10:40:38.711177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-12-12 10:40:38.711295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-12-12 10:40:38.711339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-12-12 10:40:38.711592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-12-12 10:40:38.711626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-12-12 10:40:38.711797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-12-12 10:40:38.711829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 
00:27:05.005 [2024-12-12 10:40:38.712002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-12-12 10:40:38.712034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-12-12 10:40:38.712218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-12-12 10:40:38.712249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-12-12 10:40:38.712455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-12-12 10:40:38.712488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-12-12 10:40:38.712613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-12-12 10:40:38.712647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-12-12 10:40:38.712841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-12-12 10:40:38.712873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-12-12 10:40:38.713063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-12-12 10:40:38.713094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-12-12 10:40:38.713204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-12-12 10:40:38.713236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-12-12 10:40:38.713370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-12-12 10:40:38.713403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-12-12 10:40:38.713516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-12-12 10:40:38.713549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-12-12 10:40:38.713816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-12-12 10:40:38.713850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 
00:27:05.005 [2024-12-12 10:40:38.713966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-12-12 10:40:38.713998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-12-12 10:40:38.714264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-12-12 10:40:38.714297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-12-12 10:40:38.714532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-12-12 10:40:38.714565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-12-12 10:40:38.714743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-12-12 10:40:38.714777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-12-12 10:40:38.714895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-12-12 10:40:38.714928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-12-12 10:40:38.715126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-12-12 10:40:38.715157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-12-12 10:40:38.715274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-12-12 10:40:38.715307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-12-12 10:40:38.715549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-12-12 10:40:38.715593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-12-12 10:40:38.715862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-12-12 10:40:38.715895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-12-12 10:40:38.716066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-12-12 10:40:38.716099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 
00:27:05.006 [2024-12-12 10:40:38.716285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-12-12 10:40:38.716317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-12-12 10:40:38.716521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-12-12 10:40:38.716553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-12-12 10:40:38.716844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-12-12 10:40:38.716877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-12-12 10:40:38.717006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-12-12 10:40:38.717039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-12-12 10:40:38.717214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-12-12 10:40:38.717286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-12-12 10:40:38.717487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-12-12 10:40:38.717523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-12-12 10:40:38.717747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-12-12 10:40:38.717781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-12-12 10:40:38.717974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-12-12 10:40:38.718006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-12-12 10:40:38.718246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-12-12 10:40:38.718279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-12-12 10:40:38.718551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-12-12 10:40:38.718593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 
00:27:05.006 [2024-12-12 10:40:38.718847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-12-12 10:40:38.718879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-12-12 10:40:38.719069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-12-12 10:40:38.719101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-12-12 10:40:38.719295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-12-12 10:40:38.719327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-12-12 10:40:38.719446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.007 [2024-12-12 10:40:38.719480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.007 qpair failed and we were unable to recover it. 00:27:05.007 [2024-12-12 10:40:38.719607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.007 [2024-12-12 10:40:38.719642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.007 qpair failed and we were unable to recover it. 00:27:05.007 [2024-12-12 10:40:38.719813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.007 [2024-12-12 10:40:38.719846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.007 qpair failed and we were unable to recover it. 00:27:05.007 [2024-12-12 10:40:38.720057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.007 [2024-12-12 10:40:38.720090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.007 qpair failed and we were unable to recover it. 00:27:05.007 [2024-12-12 10:40:38.720279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.007 [2024-12-12 10:40:38.720313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.007 qpair failed and we were unable to recover it. 00:27:05.007 [2024-12-12 10:40:38.720502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.007 [2024-12-12 10:40:38.720535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.007 qpair failed and we were unable to recover it. 00:27:05.007 [2024-12-12 10:40:38.720730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.007 [2024-12-12 10:40:38.720764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.007 qpair failed and we were unable to recover it. 
00:27:05.007 [2024-12-12 10:40:38.720964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.007 [2024-12-12 10:40:38.720997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.007 qpair failed and we were unable to recover it. 00:27:05.007 [2024-12-12 10:40:38.721281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.007 [2024-12-12 10:40:38.721313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.007 qpair failed and we were unable to recover it. 00:27:05.007 [2024-12-12 10:40:38.721517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.007 [2024-12-12 10:40:38.721550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.007 qpair failed and we were unable to recover it. 00:27:05.007 [2024-12-12 10:40:38.721800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.007 [2024-12-12 10:40:38.721833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.007 qpair failed and we were unable to recover it. 00:27:05.007 [2024-12-12 10:40:38.722021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.007 [2024-12-12 10:40:38.722054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.007 qpair failed and we were unable to recover it. 00:27:05.007 [2024-12-12 10:40:38.722182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.007 [2024-12-12 10:40:38.722214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.007 qpair failed and we were unable to recover it. 00:27:05.007 [2024-12-12 10:40:38.722338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.007 [2024-12-12 10:40:38.722370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.007 qpair failed and we were unable to recover it. 00:27:05.007 [2024-12-12 10:40:38.722539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.007 [2024-12-12 10:40:38.722580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.007 qpair failed and we were unable to recover it. 00:27:05.007 [2024-12-12 10:40:38.722700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.007 [2024-12-12 10:40:38.722734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.007 qpair failed and we were unable to recover it. 00:27:05.007 [2024-12-12 10:40:38.722997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.007 [2024-12-12 10:40:38.723029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.007 qpair failed and we were unable to recover it. 
00:27:05.007 [2024-12-12 10:40:38.723218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.007 [2024-12-12 10:40:38.723251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.007 qpair failed and we were unable to recover it. 00:27:05.007 [2024-12-12 10:40:38.723432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.007 [2024-12-12 10:40:38.723471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.007 qpair failed and we were unable to recover it. 00:27:05.007 [2024-12-12 10:40:38.723715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.007 [2024-12-12 10:40:38.723750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.007 qpair failed and we were unable to recover it. 00:27:05.007 [2024-12-12 10:40:38.723934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.007 [2024-12-12 10:40:38.723966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.007 qpair failed and we were unable to recover it. 00:27:05.007 [2024-12-12 10:40:38.724212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.008 [2024-12-12 10:40:38.724245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.008 qpair failed and we were unable to recover it. 00:27:05.008 [2024-12-12 10:40:38.724414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.008 [2024-12-12 10:40:38.724447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.008 qpair failed and we were unable to recover it. 00:27:05.008 [2024-12-12 10:40:38.724563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.008 [2024-12-12 10:40:38.724607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.008 qpair failed and we were unable to recover it. 00:27:05.008 [2024-12-12 10:40:38.724788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.008 [2024-12-12 10:40:38.724822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.008 qpair failed and we were unable to recover it. 00:27:05.008 [2024-12-12 10:40:38.724954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.008 [2024-12-12 10:40:38.724986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.008 qpair failed and we were unable to recover it. 00:27:05.008 [2024-12-12 10:40:38.725248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.008 [2024-12-12 10:40:38.725281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.008 qpair failed and we were unable to recover it. 
00:27:05.008 [2024-12-12 10:40:38.725479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.008 [2024-12-12 10:40:38.725512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:05.008 qpair failed and we were unable to recover it.
[The identical three-line failure pattern above — posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it — repeats roughly 200 more times with only the timestamps advancing, from 10:40:38.725711 through 10:40:38.777396.]
00:27:05.016 [2024-12-12 10:40:38.777655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.016 [2024-12-12 10:40:38.777690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.016 qpair failed and we were unable to recover it. 00:27:05.016 [2024-12-12 10:40:38.777972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.016 [2024-12-12 10:40:38.778004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.016 qpair failed and we were unable to recover it. 00:27:05.016 [2024-12-12 10:40:38.778178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.016 [2024-12-12 10:40:38.778211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.016 qpair failed and we were unable to recover it. 00:27:05.016 [2024-12-12 10:40:38.778466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.016 [2024-12-12 10:40:38.778500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.016 qpair failed and we were unable to recover it. 00:27:05.016 [2024-12-12 10:40:38.778788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.016 [2024-12-12 10:40:38.778823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.016 qpair failed and we were unable to recover it. 00:27:05.016 [2024-12-12 10:40:38.779092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.016 [2024-12-12 10:40:38.779124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.016 qpair failed and we were unable to recover it. 00:27:05.016 [2024-12-12 10:40:38.779307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.016 [2024-12-12 10:40:38.779339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.016 qpair failed and we were unable to recover it. 00:27:05.016 [2024-12-12 10:40:38.779545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.016 [2024-12-12 10:40:38.779584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.016 qpair failed and we were unable to recover it. 00:27:05.016 [2024-12-12 10:40:38.779868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.016 [2024-12-12 10:40:38.779901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.016 qpair failed and we were unable to recover it. 00:27:05.016 [2024-12-12 10:40:38.780168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.016 [2024-12-12 10:40:38.780219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.016 qpair failed and we were unable to recover it. 
00:27:05.016 [2024-12-12 10:40:38.780360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.016 [2024-12-12 10:40:38.780392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.016 qpair failed and we were unable to recover it. 00:27:05.016 [2024-12-12 10:40:38.780676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.016 [2024-12-12 10:40:38.780710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.016 qpair failed and we were unable to recover it. 00:27:05.016 [2024-12-12 10:40:38.780839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.016 [2024-12-12 10:40:38.780872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.016 qpair failed and we were unable to recover it. 00:27:05.016 [2024-12-12 10:40:38.781159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.016 [2024-12-12 10:40:38.781192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.016 qpair failed and we were unable to recover it. 00:27:05.016 [2024-12-12 10:40:38.781383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.016 [2024-12-12 10:40:38.781416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.016 qpair failed and we were unable to recover it. 00:27:05.016 [2024-12-12 10:40:38.781591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.016 [2024-12-12 10:40:38.781626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.016 qpair failed and we were unable to recover it. 00:27:05.016 [2024-12-12 10:40:38.781892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.016 [2024-12-12 10:40:38.781924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.016 qpair failed and we were unable to recover it. 00:27:05.016 [2024-12-12 10:40:38.782112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.016 [2024-12-12 10:40:38.782144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.016 qpair failed and we were unable to recover it. 00:27:05.016 [2024-12-12 10:40:38.782329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.016 [2024-12-12 10:40:38.782362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.016 qpair failed and we were unable to recover it. 00:27:05.016 [2024-12-12 10:40:38.782626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.016 [2024-12-12 10:40:38.782661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.016 qpair failed and we were unable to recover it. 
00:27:05.016 [2024-12-12 10:40:38.782948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.016 [2024-12-12 10:40:38.782980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.016 qpair failed and we were unable to recover it. 00:27:05.016 [2024-12-12 10:40:38.783166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.016 [2024-12-12 10:40:38.783198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.016 qpair failed and we were unable to recover it. 00:27:05.016 [2024-12-12 10:40:38.783466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.016 [2024-12-12 10:40:38.783498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.016 qpair failed and we were unable to recover it. 00:27:05.016 [2024-12-12 10:40:38.783705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.016 [2024-12-12 10:40:38.783740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.016 qpair failed and we were unable to recover it. 00:27:05.016 [2024-12-12 10:40:38.783925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.016 [2024-12-12 10:40:38.783957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.016 qpair failed and we were unable to recover it. 00:27:05.016 [2024-12-12 10:40:38.784217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.016 [2024-12-12 10:40:38.784256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.016 qpair failed and we were unable to recover it. 00:27:05.016 [2024-12-12 10:40:38.784473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.016 [2024-12-12 10:40:38.784505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.016 qpair failed and we were unable to recover it. 00:27:05.016 [2024-12-12 10:40:38.784699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.016 [2024-12-12 10:40:38.784732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.016 qpair failed and we were unable to recover it. 00:27:05.016 [2024-12-12 10:40:38.785038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.016 [2024-12-12 10:40:38.785071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.016 qpair failed and we were unable to recover it. 00:27:05.016 [2024-12-12 10:40:38.785341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.016 [2024-12-12 10:40:38.785373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.016 qpair failed and we were unable to recover it. 
00:27:05.016 [2024-12-12 10:40:38.785559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.016 [2024-12-12 10:40:38.785602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.016 qpair failed and we were unable to recover it. 00:27:05.016 [2024-12-12 10:40:38.785711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.016 [2024-12-12 10:40:38.785743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.016 qpair failed and we were unable to recover it. 00:27:05.016 [2024-12-12 10:40:38.785964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.016 [2024-12-12 10:40:38.785996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.016 qpair failed and we were unable to recover it. 00:27:05.016 [2024-12-12 10:40:38.786265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.016 [2024-12-12 10:40:38.786298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.016 qpair failed and we were unable to recover it. 00:27:05.016 [2024-12-12 10:40:38.786589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.016 [2024-12-12 10:40:38.786622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.016 qpair failed and we were unable to recover it. 00:27:05.016 [2024-12-12 10:40:38.786893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.016 [2024-12-12 10:40:38.786926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.016 qpair failed and we were unable to recover it. 00:27:05.016 [2024-12-12 10:40:38.787130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.016 [2024-12-12 10:40:38.787163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.016 qpair failed and we were unable to recover it. 00:27:05.016 [2024-12-12 10:40:38.787404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.016 [2024-12-12 10:40:38.787437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.016 qpair failed and we were unable to recover it. 00:27:05.016 [2024-12-12 10:40:38.787676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.017 [2024-12-12 10:40:38.787711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.017 qpair failed and we were unable to recover it. 00:27:05.017 [2024-12-12 10:40:38.787903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.017 [2024-12-12 10:40:38.787935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.017 qpair failed and we were unable to recover it. 
00:27:05.017 [2024-12-12 10:40:38.788122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.017 [2024-12-12 10:40:38.788155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.017 qpair failed and we were unable to recover it. 00:27:05.017 [2024-12-12 10:40:38.788422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.017 [2024-12-12 10:40:38.788455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.017 qpair failed and we were unable to recover it. 00:27:05.017 [2024-12-12 10:40:38.788742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.017 [2024-12-12 10:40:38.788776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.017 qpair failed and we were unable to recover it. 00:27:05.017 [2024-12-12 10:40:38.789042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.017 [2024-12-12 10:40:38.789075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.017 qpair failed and we were unable to recover it. 00:27:05.017 [2024-12-12 10:40:38.789367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.017 [2024-12-12 10:40:38.789400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.017 qpair failed and we were unable to recover it. 00:27:05.017 [2024-12-12 10:40:38.789666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.017 [2024-12-12 10:40:38.789701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.017 qpair failed and we were unable to recover it. 00:27:05.017 [2024-12-12 10:40:38.789992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.017 [2024-12-12 10:40:38.790025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.017 qpair failed and we were unable to recover it. 00:27:05.017 [2024-12-12 10:40:38.790308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.017 [2024-12-12 10:40:38.790340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.017 qpair failed and we were unable to recover it. 00:27:05.017 [2024-12-12 10:40:38.790608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.017 [2024-12-12 10:40:38.790641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.017 qpair failed and we were unable to recover it. 00:27:05.017 [2024-12-12 10:40:38.790928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.017 [2024-12-12 10:40:38.790960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.017 qpair failed and we were unable to recover it. 
00:27:05.017 [2024-12-12 10:40:38.791178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.017 [2024-12-12 10:40:38.791210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.017 qpair failed and we were unable to recover it. 00:27:05.017 [2024-12-12 10:40:38.791477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.017 [2024-12-12 10:40:38.791511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.017 qpair failed and we were unable to recover it. 00:27:05.017 [2024-12-12 10:40:38.791759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.017 [2024-12-12 10:40:38.791793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.017 qpair failed and we were unable to recover it. 00:27:05.017 [2024-12-12 10:40:38.791985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.017 [2024-12-12 10:40:38.792017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.017 qpair failed and we were unable to recover it. 00:27:05.017 [2024-12-12 10:40:38.792138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.017 [2024-12-12 10:40:38.792171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.017 qpair failed and we were unable to recover it. 00:27:05.017 [2024-12-12 10:40:38.792294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.017 [2024-12-12 10:40:38.792327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.017 qpair failed and we were unable to recover it. 00:27:05.017 [2024-12-12 10:40:38.792600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.017 [2024-12-12 10:40:38.792634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.017 qpair failed and we were unable to recover it. 00:27:05.017 [2024-12-12 10:40:38.792809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.017 [2024-12-12 10:40:38.792843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.017 qpair failed and we were unable to recover it. 00:27:05.017 [2024-12-12 10:40:38.793034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.017 [2024-12-12 10:40:38.793068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.017 qpair failed and we were unable to recover it. 00:27:05.017 [2024-12-12 10:40:38.793254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.017 [2024-12-12 10:40:38.793287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.017 qpair failed and we were unable to recover it. 
00:27:05.017 [2024-12-12 10:40:38.793406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.017 [2024-12-12 10:40:38.793438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.017 qpair failed and we were unable to recover it. 00:27:05.017 [2024-12-12 10:40:38.793629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.017 [2024-12-12 10:40:38.793663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.017 qpair failed and we were unable to recover it. 00:27:05.017 [2024-12-12 10:40:38.793915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.017 [2024-12-12 10:40:38.793948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.017 qpair failed and we were unable to recover it. 00:27:05.017 [2024-12-12 10:40:38.794134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.017 [2024-12-12 10:40:38.794168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.017 qpair failed and we were unable to recover it. 00:27:05.017 [2024-12-12 10:40:38.794384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.017 [2024-12-12 10:40:38.794417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.017 qpair failed and we were unable to recover it. 00:27:05.017 [2024-12-12 10:40:38.794590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.017 [2024-12-12 10:40:38.794624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.017 qpair failed and we were unable to recover it. 00:27:05.017 [2024-12-12 10:40:38.794872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.017 [2024-12-12 10:40:38.794906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.017 qpair failed and we were unable to recover it. 00:27:05.017 [2024-12-12 10:40:38.795194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.017 [2024-12-12 10:40:38.795228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.017 qpair failed and we were unable to recover it. 00:27:05.017 [2024-12-12 10:40:38.795493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.017 [2024-12-12 10:40:38.795525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.017 qpair failed and we were unable to recover it. 00:27:05.017 [2024-12-12 10:40:38.795812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.017 [2024-12-12 10:40:38.795846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.017 qpair failed and we were unable to recover it. 
00:27:05.017 [2024-12-12 10:40:38.796030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.017 [2024-12-12 10:40:38.796062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.017 qpair failed and we were unable to recover it. 00:27:05.017 [2024-12-12 10:40:38.796317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.017 [2024-12-12 10:40:38.796350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.017 qpair failed and we were unable to recover it. 00:27:05.017 [2024-12-12 10:40:38.796608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.017 [2024-12-12 10:40:38.796644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.017 qpair failed and we were unable to recover it. 00:27:05.017 [2024-12-12 10:40:38.796934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.017 [2024-12-12 10:40:38.796967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.017 qpair failed and we were unable to recover it. 00:27:05.017 [2024-12-12 10:40:38.797243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.017 [2024-12-12 10:40:38.797275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.018 qpair failed and we were unable to recover it. 00:27:05.018 [2024-12-12 10:40:38.797543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.018 [2024-12-12 10:40:38.797583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.018 qpair failed and we were unable to recover it. 00:27:05.018 [2024-12-12 10:40:38.797705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.018 [2024-12-12 10:40:38.797737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.018 qpair failed and we were unable to recover it. 00:27:05.018 [2024-12-12 10:40:38.797937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.018 [2024-12-12 10:40:38.797969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.018 qpair failed and we were unable to recover it. 00:27:05.018 [2024-12-12 10:40:38.798173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.018 [2024-12-12 10:40:38.798206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.018 qpair failed and we were unable to recover it. 00:27:05.018 [2024-12-12 10:40:38.798496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.018 [2024-12-12 10:40:38.798528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.018 qpair failed and we were unable to recover it. 
00:27:05.018 [2024-12-12 10:40:38.798720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.018 [2024-12-12 10:40:38.798754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.018 qpair failed and we were unable to recover it. 00:27:05.018 [2024-12-12 10:40:38.799042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.018 [2024-12-12 10:40:38.799076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.018 qpair failed and we were unable to recover it. 00:27:05.018 [2024-12-12 10:40:38.799261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.018 [2024-12-12 10:40:38.799293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.018 qpair failed and we were unable to recover it. 00:27:05.018 [2024-12-12 10:40:38.799536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.018 [2024-12-12 10:40:38.799577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.018 qpair failed and we were unable to recover it. 00:27:05.018 [2024-12-12 10:40:38.799868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.018 [2024-12-12 10:40:38.799902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.018 qpair failed and we were unable to recover it. 00:27:05.018 [2024-12-12 10:40:38.800117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.018 [2024-12-12 10:40:38.800149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.018 qpair failed and we were unable to recover it. 00:27:05.018 [2024-12-12 10:40:38.800356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.018 [2024-12-12 10:40:38.800389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.018 qpair failed and we were unable to recover it. 00:27:05.018 [2024-12-12 10:40:38.800642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.018 [2024-12-12 10:40:38.800677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.018 qpair failed and we were unable to recover it. 00:27:05.018 [2024-12-12 10:40:38.800934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.018 [2024-12-12 10:40:38.800968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.018 qpair failed and we were unable to recover it. 00:27:05.018 [2024-12-12 10:40:38.801160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.018 [2024-12-12 10:40:38.801193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.018 qpair failed and we were unable to recover it. 
00:27:05.018 [2024-12-12 10:40:38.801434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.018 [2024-12-12 10:40:38.801467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.018 qpair failed and we were unable to recover it. 00:27:05.018 [2024-12-12 10:40:38.801648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.018 [2024-12-12 10:40:38.801682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.018 qpair failed and we were unable to recover it. 00:27:05.018 [2024-12-12 10:40:38.801924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.018 [2024-12-12 10:40:38.801957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.018 qpair failed and we were unable to recover it. 00:27:05.018 [2024-12-12 10:40:38.802212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.018 [2024-12-12 10:40:38.802249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.018 qpair failed and we were unable to recover it. 00:27:05.018 [2024-12-12 10:40:38.802442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.018 [2024-12-12 10:40:38.802475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.018 qpair failed and we were unable to recover it. 00:27:05.018 [2024-12-12 10:40:38.802662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.018 [2024-12-12 10:40:38.802697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.018 qpair failed and we were unable to recover it. 00:27:05.018 [2024-12-12 10:40:38.802960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.018 [2024-12-12 10:40:38.802993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.018 qpair failed and we were unable to recover it. 00:27:05.018 [2024-12-12 10:40:38.803178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.018 [2024-12-12 10:40:38.803211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.018 qpair failed and we were unable to recover it. 00:27:05.018 [2024-12-12 10:40:38.803408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.018 [2024-12-12 10:40:38.803441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.018 qpair failed and we were unable to recover it. 00:27:05.018 [2024-12-12 10:40:38.803710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.018 [2024-12-12 10:40:38.803744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.018 qpair failed and we were unable to recover it. 
00:27:05.018 [2024-12-12 10:40:38.804021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.018 [2024-12-12 10:40:38.804058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.018 qpair failed and we were unable to recover it. 00:27:05.018 [2024-12-12 10:40:38.804280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.018 [2024-12-12 10:40:38.804315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.018 qpair failed and we were unable to recover it. 00:27:05.018 [2024-12-12 10:40:38.804446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.018 [2024-12-12 10:40:38.804478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.018 qpair failed and we were unable to recover it. 00:27:05.018 [2024-12-12 10:40:38.804676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.018 [2024-12-12 10:40:38.804712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.018 qpair failed and we were unable to recover it. 00:27:05.018 [2024-12-12 10:40:38.804888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.018 [2024-12-12 10:40:38.804920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.018 qpair failed and we were unable to recover it. 00:27:05.018 [2024-12-12 10:40:38.805183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.018 [2024-12-12 10:40:38.805216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.018 qpair failed and we were unable to recover it. 00:27:05.018 [2024-12-12 10:40:38.805485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.018 [2024-12-12 10:40:38.805518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.018 qpair failed and we were unable to recover it. 00:27:05.019 [2024-12-12 10:40:38.805737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.019 [2024-12-12 10:40:38.805771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.019 qpair failed and we were unable to recover it. 00:27:05.019 [2024-12-12 10:40:38.805924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.019 [2024-12-12 10:40:38.805958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.019 qpair failed and we were unable to recover it. 00:27:05.019 [2024-12-12 10:40:38.806224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.019 [2024-12-12 10:40:38.806257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.019 qpair failed and we were unable to recover it. 
00:27:05.019 [2024-12-12 10:40:38.806456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.019 [2024-12-12 10:40:38.806488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.019 qpair failed and we were unable to recover it. 00:27:05.019 [2024-12-12 10:40:38.806637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.019 [2024-12-12 10:40:38.806672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.019 qpair failed and we were unable to recover it. 00:27:05.019 [2024-12-12 10:40:38.806873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.019 [2024-12-12 10:40:38.806907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.019 qpair failed and we were unable to recover it. 00:27:05.019 [2024-12-12 10:40:38.807167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.019 [2024-12-12 10:40:38.807200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.019 qpair failed and we were unable to recover it. 00:27:05.019 [2024-12-12 10:40:38.807399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.019 [2024-12-12 10:40:38.807433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.019 qpair failed and we were unable to recover it. 00:27:05.019 [2024-12-12 10:40:38.807675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.019 [2024-12-12 10:40:38.807710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.019 qpair failed and we were unable to recover it. 00:27:05.019 [2024-12-12 10:40:38.807915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.019 [2024-12-12 10:40:38.807948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.019 qpair failed and we were unable to recover it. 00:27:05.019 [2024-12-12 10:40:38.808083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.019 [2024-12-12 10:40:38.808116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.019 qpair failed and we were unable to recover it. 00:27:05.019 [2024-12-12 10:40:38.808305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.019 [2024-12-12 10:40:38.808338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.019 qpair failed and we were unable to recover it. 00:27:05.019 [2024-12-12 10:40:38.808607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.019 [2024-12-12 10:40:38.808642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.019 qpair failed and we were unable to recover it. 
00:27:05.019 [2024-12-12 10:40:38.808821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.019 [2024-12-12 10:40:38.808860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.019 qpair failed and we were unable to recover it. 00:27:05.019 [2024-12-12 10:40:38.809055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.019 [2024-12-12 10:40:38.809089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.019 qpair failed and we were unable to recover it. 00:27:05.019 [2024-12-12 10:40:38.809288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.019 [2024-12-12 10:40:38.809321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.019 qpair failed and we were unable to recover it. 00:27:05.019 [2024-12-12 10:40:38.809562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.019 [2024-12-12 10:40:38.809603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.019 qpair failed and we were unable to recover it. 00:27:05.019 [2024-12-12 10:40:38.809847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.019 [2024-12-12 10:40:38.809880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.019 qpair failed and we were unable to recover it. 00:27:05.019 [2024-12-12 10:40:38.810070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.019 [2024-12-12 10:40:38.810103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.019 qpair failed and we were unable to recover it. 00:27:05.019 [2024-12-12 10:40:38.810391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.019 [2024-12-12 10:40:38.810425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.019 qpair failed and we were unable to recover it. 00:27:05.019 [2024-12-12 10:40:38.810696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.019 [2024-12-12 10:40:38.810733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.019 qpair failed and we were unable to recover it. 00:27:05.019 [2024-12-12 10:40:38.810935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.019 [2024-12-12 10:40:38.810969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.019 qpair failed and we were unable to recover it. 00:27:05.019 [2024-12-12 10:40:38.811236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.019 [2024-12-12 10:40:38.811269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.019 qpair failed and we were unable to recover it. 
00:27:05.019 [2024-12-12 10:40:38.811450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.019 [2024-12-12 10:40:38.811483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:05.019 qpair failed and we were unable to recover it.
[... the same connect() failure triplet (errno = 111, ECONNREFUSED) for tqpair=0x1c1b1a0 against addr=10.0.0.2, port=4420 repeats through 10:40:38.819509 ...]
00:27:05.020 [2024-12-12 10:40:38.819867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.020 [2024-12-12 10:40:38.819942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:05.020 qpair failed and we were unable to recover it.
[... the same triplet for tqpair=0x7fb830000b90 repeats through 10:40:38.829069 ...]
00:27:05.021 [2024-12-12 10:40:38.829271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c290f0 is same with the state(6) to be set
00:27:05.021 [2024-12-12 10:40:38.829539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.021 [2024-12-12 10:40:38.829583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:05.021 qpair failed and we were unable to recover it.
[... the same triplet for tqpair=0x1c1b1a0 repeats through 10:40:38.865678 ...]
00:27:05.025 [2024-12-12 10:40:38.865971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.025 [2024-12-12 10:40:38.866007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:05.025 qpair failed and we were unable to recover it.
00:27:05.025 [2024-12-12 10:40:38.866293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.025 [2024-12-12 10:40:38.866326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.025 qpair failed and we were unable to recover it. 00:27:05.025 [2024-12-12 10:40:38.866552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.025 [2024-12-12 10:40:38.866594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.025 qpair failed and we were unable to recover it. 00:27:05.025 [2024-12-12 10:40:38.866795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.025 [2024-12-12 10:40:38.866829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.025 qpair failed and we were unable to recover it. 00:27:05.025 [2024-12-12 10:40:38.867050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.025 [2024-12-12 10:40:38.867084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.025 qpair failed and we were unable to recover it. 00:27:05.025 [2024-12-12 10:40:38.867304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.025 [2024-12-12 10:40:38.867338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.025 qpair failed and we were unable to recover it. 00:27:05.025 [2024-12-12 10:40:38.867616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.025 [2024-12-12 10:40:38.867651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.025 qpair failed and we were unable to recover it. 00:27:05.025 [2024-12-12 10:40:38.867906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.025 [2024-12-12 10:40:38.867939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.025 qpair failed and we were unable to recover it. 00:27:05.025 [2024-12-12 10:40:38.868092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.025 [2024-12-12 10:40:38.868127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.025 qpair failed and we were unable to recover it. 00:27:05.025 [2024-12-12 10:40:38.868375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.025 [2024-12-12 10:40:38.868410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.025 qpair failed and we were unable to recover it. 00:27:05.025 [2024-12-12 10:40:38.868658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.025 [2024-12-12 10:40:38.868694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.025 qpair failed and we were unable to recover it. 
00:27:05.025 [2024-12-12 10:40:38.868944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.025 [2024-12-12 10:40:38.868977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.025 qpair failed and we were unable to recover it. 00:27:05.025 [2024-12-12 10:40:38.869208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.025 [2024-12-12 10:40:38.869241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.025 qpair failed and we were unable to recover it. 00:27:05.025 [2024-12-12 10:40:38.869506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.025 [2024-12-12 10:40:38.869538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.025 qpair failed and we were unable to recover it. 00:27:05.025 [2024-12-12 10:40:38.869771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.025 [2024-12-12 10:40:38.869808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.025 qpair failed and we were unable to recover it. 00:27:05.025 [2024-12-12 10:40:38.870034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.025 [2024-12-12 10:40:38.870068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.025 qpair failed and we were unable to recover it. 00:27:05.025 [2024-12-12 10:40:38.870299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.025 [2024-12-12 10:40:38.870332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.025 qpair failed and we were unable to recover it. 00:27:05.025 [2024-12-12 10:40:38.870534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.025 [2024-12-12 10:40:38.870568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.025 qpair failed and we were unable to recover it. 00:27:05.025 [2024-12-12 10:40:38.870769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.025 [2024-12-12 10:40:38.870802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.025 qpair failed and we were unable to recover it. 00:27:05.025 [2024-12-12 10:40:38.871070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.025 [2024-12-12 10:40:38.871103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.025 qpair failed and we were unable to recover it. 00:27:05.025 [2024-12-12 10:40:38.871365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.025 [2024-12-12 10:40:38.871399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.025 qpair failed and we were unable to recover it. 
00:27:05.025 [2024-12-12 10:40:38.871596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.025 [2024-12-12 10:40:38.871632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.025 qpair failed and we were unable to recover it. 00:27:05.025 [2024-12-12 10:40:38.871877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.025 [2024-12-12 10:40:38.871911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.025 qpair failed and we were unable to recover it. 00:27:05.025 [2024-12-12 10:40:38.872117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.025 [2024-12-12 10:40:38.872149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.025 qpair failed and we were unable to recover it. 00:27:05.025 [2024-12-12 10:40:38.872327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.025 [2024-12-12 10:40:38.872360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.025 qpair failed and we were unable to recover it. 00:27:05.026 [2024-12-12 10:40:38.872494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.026 [2024-12-12 10:40:38.872534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.026 qpair failed and we were unable to recover it. 00:27:05.026 [2024-12-12 10:40:38.872807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.026 [2024-12-12 10:40:38.872841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.026 qpair failed and we were unable to recover it. 00:27:05.026 [2024-12-12 10:40:38.873038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.026 [2024-12-12 10:40:38.873070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.026 qpair failed and we were unable to recover it. 00:27:05.026 [2024-12-12 10:40:38.873348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.026 [2024-12-12 10:40:38.873382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.026 qpair failed and we were unable to recover it. 00:27:05.026 [2024-12-12 10:40:38.873591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.026 [2024-12-12 10:40:38.873626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.026 qpair failed and we were unable to recover it. 00:27:05.026 [2024-12-12 10:40:38.873874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.026 [2024-12-12 10:40:38.873908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.026 qpair failed and we were unable to recover it. 
00:27:05.026 [2024-12-12 10:40:38.874096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.026 [2024-12-12 10:40:38.874130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.026 qpair failed and we were unable to recover it. 00:27:05.026 [2024-12-12 10:40:38.874342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.026 [2024-12-12 10:40:38.874375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.026 qpair failed and we were unable to recover it. 00:27:05.026 [2024-12-12 10:40:38.874583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.026 [2024-12-12 10:40:38.874618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.026 qpair failed and we were unable to recover it. 00:27:05.026 [2024-12-12 10:40:38.874795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.026 [2024-12-12 10:40:38.874831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.026 qpair failed and we were unable to recover it. 00:27:05.026 [2024-12-12 10:40:38.875098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.026 [2024-12-12 10:40:38.875132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.026 qpair failed and we were unable to recover it. 00:27:05.026 [2024-12-12 10:40:38.875420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.026 [2024-12-12 10:40:38.875455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.026 qpair failed and we were unable to recover it. 00:27:05.026 [2024-12-12 10:40:38.875720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.026 [2024-12-12 10:40:38.875756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.026 qpair failed and we were unable to recover it. 00:27:05.026 [2024-12-12 10:40:38.875949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.026 [2024-12-12 10:40:38.875984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.026 qpair failed and we were unable to recover it. 00:27:05.026 [2024-12-12 10:40:38.876189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.026 [2024-12-12 10:40:38.876224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.026 qpair failed and we were unable to recover it. 00:27:05.026 [2024-12-12 10:40:38.876410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.026 [2024-12-12 10:40:38.876442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.026 qpair failed and we were unable to recover it. 
00:27:05.026 [2024-12-12 10:40:38.876721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.026 [2024-12-12 10:40:38.876757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.026 qpair failed and we were unable to recover it. 00:27:05.026 [2024-12-12 10:40:38.876901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.026 [2024-12-12 10:40:38.876935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.026 qpair failed and we were unable to recover it. 00:27:05.026 [2024-12-12 10:40:38.877142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.026 [2024-12-12 10:40:38.877177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.026 qpair failed and we were unable to recover it. 00:27:05.026 [2024-12-12 10:40:38.877434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.026 [2024-12-12 10:40:38.877467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.026 qpair failed and we were unable to recover it. 00:27:05.026 [2024-12-12 10:40:38.877685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.026 [2024-12-12 10:40:38.877721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.026 qpair failed and we were unable to recover it. 00:27:05.026 [2024-12-12 10:40:38.877986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.026 [2024-12-12 10:40:38.878020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.026 qpair failed and we were unable to recover it. 00:27:05.026 [2024-12-12 10:40:38.878288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.026 [2024-12-12 10:40:38.878322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.026 qpair failed and we were unable to recover it. 00:27:05.026 [2024-12-12 10:40:38.878623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.026 [2024-12-12 10:40:38.878658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.026 qpair failed and we were unable to recover it. 00:27:05.026 [2024-12-12 10:40:38.878850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.026 [2024-12-12 10:40:38.878885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.026 qpair failed and we were unable to recover it. 00:27:05.026 [2024-12-12 10:40:38.879146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.026 [2024-12-12 10:40:38.879181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.026 qpair failed and we were unable to recover it. 
00:27:05.026 [2024-12-12 10:40:38.879459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.026 [2024-12-12 10:40:38.879494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.026 qpair failed and we were unable to recover it. 00:27:05.026 [2024-12-12 10:40:38.879778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.026 [2024-12-12 10:40:38.879814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.026 qpair failed and we were unable to recover it. 00:27:05.026 [2024-12-12 10:40:38.880089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.026 [2024-12-12 10:40:38.880124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.026 qpair failed and we were unable to recover it. 00:27:05.026 [2024-12-12 10:40:38.880423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.026 [2024-12-12 10:40:38.880457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.026 qpair failed and we were unable to recover it. 00:27:05.026 [2024-12-12 10:40:38.880678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.026 [2024-12-12 10:40:38.880713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.026 qpair failed and we were unable to recover it. 00:27:05.026 [2024-12-12 10:40:38.880987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.026 [2024-12-12 10:40:38.881023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.026 qpair failed and we were unable to recover it. 00:27:05.026 [2024-12-12 10:40:38.881301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.026 [2024-12-12 10:40:38.881335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.026 qpair failed and we were unable to recover it. 00:27:05.026 [2024-12-12 10:40:38.881623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.026 [2024-12-12 10:40:38.881657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.026 qpair failed and we were unable to recover it. 00:27:05.026 [2024-12-12 10:40:38.881791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.026 [2024-12-12 10:40:38.881825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.026 qpair failed and we were unable to recover it. 00:27:05.026 [2024-12-12 10:40:38.882072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.026 [2024-12-12 10:40:38.882107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.026 qpair failed and we were unable to recover it. 
00:27:05.026 [2024-12-12 10:40:38.882449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.026 [2024-12-12 10:40:38.882484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.026 qpair failed and we were unable to recover it. 00:27:05.026 [2024-12-12 10:40:38.882708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.027 [2024-12-12 10:40:38.882744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.027 qpair failed and we were unable to recover it. 00:27:05.027 [2024-12-12 10:40:38.882939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.027 [2024-12-12 10:40:38.882972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.027 qpair failed and we were unable to recover it. 00:27:05.027 [2024-12-12 10:40:38.883244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.027 [2024-12-12 10:40:38.883278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.027 qpair failed and we were unable to recover it. 00:27:05.027 [2024-12-12 10:40:38.883479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.027 [2024-12-12 10:40:38.883514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.027 qpair failed and we were unable to recover it. 00:27:05.027 [2024-12-12 10:40:38.883737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.027 [2024-12-12 10:40:38.883773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.027 qpair failed and we were unable to recover it. 00:27:05.027 [2024-12-12 10:40:38.884003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.027 [2024-12-12 10:40:38.884037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.027 qpair failed and we were unable to recover it. 00:27:05.027 [2024-12-12 10:40:38.884289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.027 [2024-12-12 10:40:38.884324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.027 qpair failed and we were unable to recover it. 00:27:05.027 [2024-12-12 10:40:38.884605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.027 [2024-12-12 10:40:38.884641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.027 qpair failed and we were unable to recover it. 00:27:05.027 [2024-12-12 10:40:38.884941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.027 [2024-12-12 10:40:38.884977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.027 qpair failed and we were unable to recover it. 
00:27:05.027 [2024-12-12 10:40:38.885167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.027 [2024-12-12 10:40:38.885201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.027 qpair failed and we were unable to recover it. 00:27:05.027 [2024-12-12 10:40:38.885343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.027 [2024-12-12 10:40:38.885377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.027 qpair failed and we were unable to recover it. 00:27:05.027 [2024-12-12 10:40:38.885697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.027 [2024-12-12 10:40:38.885733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.027 qpair failed and we were unable to recover it. 00:27:05.027 [2024-12-12 10:40:38.886018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.027 [2024-12-12 10:40:38.886052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.027 qpair failed and we were unable to recover it. 00:27:05.027 [2024-12-12 10:40:38.886194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.027 [2024-12-12 10:40:38.886229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.027 qpair failed and we were unable to recover it. 00:27:05.027 [2024-12-12 10:40:38.886512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.027 [2024-12-12 10:40:38.886546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.027 qpair failed and we were unable to recover it. 00:27:05.027 [2024-12-12 10:40:38.886770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.027 [2024-12-12 10:40:38.886806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.027 qpair failed and we were unable to recover it. 00:27:05.027 [2024-12-12 10:40:38.886949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.027 [2024-12-12 10:40:38.886983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.027 qpair failed and we were unable to recover it. 00:27:05.027 [2024-12-12 10:40:38.887169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.027 [2024-12-12 10:40:38.887203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.027 qpair failed and we were unable to recover it. 00:27:05.027 [2024-12-12 10:40:38.887491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.027 [2024-12-12 10:40:38.887525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.027 qpair failed and we were unable to recover it. 
00:27:05.027 [2024-12-12 10:40:38.887730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.027 [2024-12-12 10:40:38.887767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.027 qpair failed and we were unable to recover it. 00:27:05.027 [2024-12-12 10:40:38.887987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.027 [2024-12-12 10:40:38.888022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.027 qpair failed and we were unable to recover it. 00:27:05.027 [2024-12-12 10:40:38.888241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.027 [2024-12-12 10:40:38.888277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.027 qpair failed and we were unable to recover it. 00:27:05.027 [2024-12-12 10:40:38.888530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.027 [2024-12-12 10:40:38.888566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.027 qpair failed and we were unable to recover it. 00:27:05.027 [2024-12-12 10:40:38.888790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.027 [2024-12-12 10:40:38.888826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.027 qpair failed and we were unable to recover it. 00:27:05.027 [2024-12-12 10:40:38.889106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.027 [2024-12-12 10:40:38.889140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.027 qpair failed and we were unable to recover it. 00:27:05.027 [2024-12-12 10:40:38.889430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.027 [2024-12-12 10:40:38.889464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.027 qpair failed and we were unable to recover it. 00:27:05.027 [2024-12-12 10:40:38.889736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.027 [2024-12-12 10:40:38.889772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.027 qpair failed and we were unable to recover it. 00:27:05.027 [2024-12-12 10:40:38.889914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.027 [2024-12-12 10:40:38.889948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.027 qpair failed and we were unable to recover it. 00:27:05.027 [2024-12-12 10:40:38.890134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.027 [2024-12-12 10:40:38.890169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.027 qpair failed and we were unable to recover it. 
00:27:05.027 [2024-12-12 10:40:38.890428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.027 [2024-12-12 10:40:38.890462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.027 qpair failed and we were unable to recover it. 00:27:05.027 [2024-12-12 10:40:38.890743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.027 [2024-12-12 10:40:38.890780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.027 qpair failed and we were unable to recover it. 00:27:05.027 [2024-12-12 10:40:38.890968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.027 [2024-12-12 10:40:38.891010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.027 qpair failed and we were unable to recover it. 00:27:05.027 [2024-12-12 10:40:38.891214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.027 [2024-12-12 10:40:38.891249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.027 qpair failed and we were unable to recover it. 00:27:05.027 [2024-12-12 10:40:38.891525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.027 [2024-12-12 10:40:38.891561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.027 qpair failed and we were unable to recover it. 00:27:05.027 [2024-12-12 10:40:38.891850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.027 [2024-12-12 10:40:38.891888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.027 qpair failed and we were unable to recover it. 00:27:05.027 [2024-12-12 10:40:38.892022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.027 [2024-12-12 10:40:38.892055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.027 qpair failed and we were unable to recover it. 00:27:05.027 [2024-12-12 10:40:38.892362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.027 [2024-12-12 10:40:38.892397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.027 qpair failed and we were unable to recover it. 00:27:05.027 [2024-12-12 10:40:38.892657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.027 [2024-12-12 10:40:38.892693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.027 qpair failed and we were unable to recover it. 00:27:05.028 [2024-12-12 10:40:38.892907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.028 [2024-12-12 10:40:38.892941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.028 qpair failed and we were unable to recover it. 
00:27:05.028 [2024-12-12 10:40:38.893136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.028 [2024-12-12 10:40:38.893171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.028 qpair failed and we were unable to recover it. 00:27:05.028 [2024-12-12 10:40:38.893394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.028 [2024-12-12 10:40:38.893428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.028 qpair failed and we were unable to recover it. 00:27:05.028 [2024-12-12 10:40:38.893817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.028 [2024-12-12 10:40:38.893857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.028 qpair failed and we were unable to recover it. 00:27:05.028 [2024-12-12 10:40:38.894005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.028 [2024-12-12 10:40:38.894040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.028 qpair failed and we were unable to recover it. 00:27:05.028 [2024-12-12 10:40:38.894191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.028 [2024-12-12 10:40:38.894226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.028 qpair failed and we were unable to recover it. 00:27:05.028 [2024-12-12 10:40:38.894503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.028 [2024-12-12 10:40:38.894538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.028 qpair failed and we were unable to recover it. 00:27:05.028 [2024-12-12 10:40:38.894749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.028 [2024-12-12 10:40:38.894787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.028 qpair failed and we were unable to recover it. 00:27:05.028 [2024-12-12 10:40:38.894973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.028 [2024-12-12 10:40:38.895006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.028 qpair failed and we were unable to recover it. 00:27:05.028 [2024-12-12 10:40:38.895247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.028 [2024-12-12 10:40:38.895281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.028 qpair failed and we were unable to recover it. 00:27:05.028 [2024-12-12 10:40:38.895589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.028 [2024-12-12 10:40:38.895624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.028 qpair failed and we were unable to recover it. 
00:27:05.028 [2024-12-12 10:40:38.895815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.028 [2024-12-12 10:40:38.895850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.028 qpair failed and we were unable to recover it. 00:27:05.028 [2024-12-12 10:40:38.896067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.028 [2024-12-12 10:40:38.896102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.028 qpair failed and we were unable to recover it. 00:27:05.028 [2024-12-12 10:40:38.896419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.028 [2024-12-12 10:40:38.896454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.028 qpair failed and we were unable to recover it. 00:27:05.028 [2024-12-12 10:40:38.896752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.028 [2024-12-12 10:40:38.896787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.028 qpair failed and we were unable to recover it. 00:27:05.028 [2024-12-12 10:40:38.897082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.028 [2024-12-12 10:40:38.897116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.028 qpair failed and we were unable to recover it. 00:27:05.028 [2024-12-12 10:40:38.897408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.028 [2024-12-12 10:40:38.897445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.028 qpair failed and we were unable to recover it. 00:27:05.028 [2024-12-12 10:40:38.897723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.028 [2024-12-12 10:40:38.897758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.028 qpair failed and we were unable to recover it. 00:27:05.028 [2024-12-12 10:40:38.898042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.028 [2024-12-12 10:40:38.898079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.028 qpair failed and we were unable to recover it. 00:27:05.028 [2024-12-12 10:40:38.898324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.028 [2024-12-12 10:40:38.898359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.028 qpair failed and we were unable to recover it. 00:27:05.028 [2024-12-12 10:40:38.898542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.028 [2024-12-12 10:40:38.898592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.028 qpair failed and we were unable to recover it. 
00:27:05.028 [2024-12-12 10:40:38.898803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.028 [2024-12-12 10:40:38.898838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.028 qpair failed and we were unable to recover it. 00:27:05.028 [2024-12-12 10:40:38.899046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.028 [2024-12-12 10:40:38.899080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.028 qpair failed and we were unable to recover it. 00:27:05.028 [2024-12-12 10:40:38.899290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.028 [2024-12-12 10:40:38.899325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.028 qpair failed and we were unable to recover it. 00:27:05.028 [2024-12-12 10:40:38.899469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.028 [2024-12-12 10:40:38.899503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.028 qpair failed and we were unable to recover it. 00:27:05.028 [2024-12-12 10:40:38.899750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.028 [2024-12-12 10:40:38.899786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.028 qpair failed and we were unable to recover it. 00:27:05.028 [2024-12-12 10:40:38.900086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.028 [2024-12-12 10:40:38.900120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.028 qpair failed and we were unable to recover it. 00:27:05.028 [2024-12-12 10:40:38.900351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.028 [2024-12-12 10:40:38.900386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.028 qpair failed and we were unable to recover it. 00:27:05.028 [2024-12-12 10:40:38.900643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.028 [2024-12-12 10:40:38.900679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.028 qpair failed and we were unable to recover it. 00:27:05.028 [2024-12-12 10:40:38.900904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.028 [2024-12-12 10:40:38.900939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.028 qpair failed and we were unable to recover it. 00:27:05.028 [2024-12-12 10:40:38.901096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.028 [2024-12-12 10:40:38.901130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.028 qpair failed and we were unable to recover it. 
00:27:05.028 [2024-12-12 10:40:38.901329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.028 [2024-12-12 10:40:38.901362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:05.028 qpair failed and we were unable to recover it.
00:27:05.034 [duplicate log entries elided: the same three-line sequence — posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. — repeats continuously from [2024-12-12 10:40:38.901500] through [2024-12-12 10:40:38.953658]]
00:27:05.034 [2024-12-12 10:40:38.953897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.034 [2024-12-12 10:40:38.953932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.034 qpair failed and we were unable to recover it. 00:27:05.034 [2024-12-12 10:40:38.954143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.034 [2024-12-12 10:40:38.954177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.034 qpair failed and we were unable to recover it. 00:27:05.034 [2024-12-12 10:40:38.954467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.034 [2024-12-12 10:40:38.954502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.034 qpair failed and we were unable to recover it. 00:27:05.034 [2024-12-12 10:40:38.954644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.034 [2024-12-12 10:40:38.954681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.034 qpair failed and we were unable to recover it. 00:27:05.034 [2024-12-12 10:40:38.954935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.034 [2024-12-12 10:40:38.954969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.034 qpair failed and we were unable to recover it. 00:27:05.034 [2024-12-12 10:40:38.955198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.034 [2024-12-12 10:40:38.955233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.034 qpair failed and we were unable to recover it. 00:27:05.034 [2024-12-12 10:40:38.955421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.034 [2024-12-12 10:40:38.955457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.034 qpair failed and we were unable to recover it. 00:27:05.034 [2024-12-12 10:40:38.955662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.034 [2024-12-12 10:40:38.955698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.034 qpair failed and we were unable to recover it. 00:27:05.034 [2024-12-12 10:40:38.955904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.034 [2024-12-12 10:40:38.955938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.034 qpair failed and we were unable to recover it. 00:27:05.034 [2024-12-12 10:40:38.956192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.034 [2024-12-12 10:40:38.956228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.034 qpair failed and we were unable to recover it. 
00:27:05.034 [2024-12-12 10:40:38.956500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.034 [2024-12-12 10:40:38.956535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.034 qpair failed and we were unable to recover it. 00:27:05.034 [2024-12-12 10:40:38.956822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.035 [2024-12-12 10:40:38.956857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.035 qpair failed and we were unable to recover it. 00:27:05.035 [2024-12-12 10:40:38.957060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.035 [2024-12-12 10:40:38.957096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.035 qpair failed and we were unable to recover it. 00:27:05.035 [2024-12-12 10:40:38.957293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.035 [2024-12-12 10:40:38.957328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.035 qpair failed and we were unable to recover it. 00:27:05.035 [2024-12-12 10:40:38.957466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.035 [2024-12-12 10:40:38.957500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.035 qpair failed and we were unable to recover it. 00:27:05.035 [2024-12-12 10:40:38.957739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.035 [2024-12-12 10:40:38.957776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.035 qpair failed and we were unable to recover it. 00:27:05.035 [2024-12-12 10:40:38.957930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.035 [2024-12-12 10:40:38.957965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.035 qpair failed and we were unable to recover it. 00:27:05.035 [2024-12-12 10:40:38.958122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.035 [2024-12-12 10:40:38.958157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.035 qpair failed and we were unable to recover it. 00:27:05.035 [2024-12-12 10:40:38.958453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.035 [2024-12-12 10:40:38.958488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.035 qpair failed and we were unable to recover it. 00:27:05.035 [2024-12-12 10:40:38.958695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.035 [2024-12-12 10:40:38.958732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.035 qpair failed and we were unable to recover it. 
00:27:05.035 [2024-12-12 10:40:38.958964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.035 [2024-12-12 10:40:38.958999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.035 qpair failed and we were unable to recover it. 00:27:05.035 [2024-12-12 10:40:38.959268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.035 [2024-12-12 10:40:38.959303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.035 qpair failed and we were unable to recover it. 00:27:05.035 [2024-12-12 10:40:38.959513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.035 [2024-12-12 10:40:38.959548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.035 qpair failed and we were unable to recover it. 00:27:05.035 [2024-12-12 10:40:38.959707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.035 [2024-12-12 10:40:38.959747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.035 qpair failed and we were unable to recover it. 00:27:05.035 [2024-12-12 10:40:38.959948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.035 [2024-12-12 10:40:38.959983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.035 qpair failed and we were unable to recover it. 00:27:05.035 [2024-12-12 10:40:38.960176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.035 [2024-12-12 10:40:38.960211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.035 qpair failed and we were unable to recover it. 00:27:05.035 [2024-12-12 10:40:38.960486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.035 [2024-12-12 10:40:38.960521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.035 qpair failed and we were unable to recover it. 00:27:05.035 [2024-12-12 10:40:38.960732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.035 [2024-12-12 10:40:38.960767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.035 qpair failed and we were unable to recover it. 00:27:05.035 [2024-12-12 10:40:38.960880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.035 [2024-12-12 10:40:38.960914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.035 qpair failed and we were unable to recover it. 00:27:05.035 [2024-12-12 10:40:38.961044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.035 [2024-12-12 10:40:38.961077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.035 qpair failed and we were unable to recover it. 
00:27:05.035 [2024-12-12 10:40:38.961318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.035 [2024-12-12 10:40:38.961354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.035 qpair failed and we were unable to recover it. 00:27:05.035 [2024-12-12 10:40:38.961560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.035 [2024-12-12 10:40:38.961609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.035 qpair failed and we were unable to recover it. 00:27:05.035 [2024-12-12 10:40:38.961809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.035 [2024-12-12 10:40:38.961844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.035 qpair failed and we were unable to recover it. 00:27:05.035 [2024-12-12 10:40:38.962056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.035 [2024-12-12 10:40:38.962090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.035 qpair failed and we were unable to recover it. 00:27:05.035 [2024-12-12 10:40:38.962362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.035 [2024-12-12 10:40:38.962396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.035 qpair failed and we were unable to recover it. 00:27:05.035 [2024-12-12 10:40:38.962614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.035 [2024-12-12 10:40:38.962653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.035 qpair failed and we were unable to recover it. 00:27:05.035 [2024-12-12 10:40:38.962840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.035 [2024-12-12 10:40:38.962875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:05.035 qpair failed and we were unable to recover it. 00:27:05.035 [2024-12-12 10:40:38.963076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.035 [2024-12-12 10:40:38.963158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.035 qpair failed and we were unable to recover it. 00:27:05.035 [2024-12-12 10:40:38.963476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.035 [2024-12-12 10:40:38.963514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.035 qpair failed and we were unable to recover it. 00:27:05.035 [2024-12-12 10:40:38.963745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.035 [2024-12-12 10:40:38.963782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.035 qpair failed and we were unable to recover it. 
00:27:05.035 [2024-12-12 10:40:38.963982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.035 [2024-12-12 10:40:38.964016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.035 qpair failed and we were unable to recover it. 00:27:05.035 [2024-12-12 10:40:38.964213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.035 [2024-12-12 10:40:38.964247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.035 qpair failed and we were unable to recover it. 00:27:05.035 [2024-12-12 10:40:38.964374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.035 [2024-12-12 10:40:38.964408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.035 qpair failed and we were unable to recover it. 00:27:05.035 [2024-12-12 10:40:38.964615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.035 [2024-12-12 10:40:38.964651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.035 qpair failed and we were unable to recover it. 00:27:05.035 [2024-12-12 10:40:38.964882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.035 [2024-12-12 10:40:38.964916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.035 qpair failed and we were unable to recover it. 00:27:05.035 [2024-12-12 10:40:38.965115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.035 [2024-12-12 10:40:38.965149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.035 qpair failed and we were unable to recover it. 00:27:05.035 [2024-12-12 10:40:38.965358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.035 [2024-12-12 10:40:38.965392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.035 qpair failed and we were unable to recover it. 00:27:05.035 [2024-12-12 10:40:38.965670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.035 [2024-12-12 10:40:38.965706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.035 qpair failed and we were unable to recover it. 00:27:05.035 [2024-12-12 10:40:38.965928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.036 [2024-12-12 10:40:38.965962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.036 qpair failed and we were unable to recover it. 00:27:05.036 [2024-12-12 10:40:38.966094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.036 [2024-12-12 10:40:38.966128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.036 qpair failed and we were unable to recover it. 
00:27:05.036 [2024-12-12 10:40:38.966335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.036 [2024-12-12 10:40:38.966379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.036 qpair failed and we were unable to recover it. 00:27:05.036 [2024-12-12 10:40:38.966671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.036 [2024-12-12 10:40:38.966707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.036 qpair failed and we were unable to recover it. 00:27:05.036 [2024-12-12 10:40:38.966873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.036 [2024-12-12 10:40:38.966907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.036 qpair failed and we were unable to recover it. 00:27:05.036 [2024-12-12 10:40:38.967049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.036 [2024-12-12 10:40:38.967083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.036 qpair failed and we were unable to recover it. 00:27:05.036 [2024-12-12 10:40:38.967239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.036 [2024-12-12 10:40:38.967273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.036 qpair failed and we were unable to recover it. 00:27:05.036 [2024-12-12 10:40:38.967548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.036 [2024-12-12 10:40:38.967592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.036 qpair failed and we were unable to recover it. 00:27:05.036 [2024-12-12 10:40:38.967794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.036 [2024-12-12 10:40:38.967828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.036 qpair failed and we were unable to recover it. 00:27:05.036 [2024-12-12 10:40:38.968031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.036 [2024-12-12 10:40:38.968065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.036 qpair failed and we were unable to recover it. 00:27:05.036 [2024-12-12 10:40:38.968341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.036 [2024-12-12 10:40:38.968375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.036 qpair failed and we were unable to recover it. 00:27:05.036 [2024-12-12 10:40:38.968629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.036 [2024-12-12 10:40:38.968665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.036 qpair failed and we were unable to recover it. 
00:27:05.036 [2024-12-12 10:40:38.968805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.036 [2024-12-12 10:40:38.968839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.036 qpair failed and we were unable to recover it. 00:27:05.036 [2024-12-12 10:40:38.969053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.036 [2024-12-12 10:40:38.969088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.036 qpair failed and we were unable to recover it. 00:27:05.036 [2024-12-12 10:40:38.969233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.036 [2024-12-12 10:40:38.969267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.036 qpair failed and we were unable to recover it. 00:27:05.036 [2024-12-12 10:40:38.969458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.036 [2024-12-12 10:40:38.969493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.036 qpair failed and we were unable to recover it. 00:27:05.036 [2024-12-12 10:40:38.969707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.036 [2024-12-12 10:40:38.969742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.036 qpair failed and we were unable to recover it. 00:27:05.036 [2024-12-12 10:40:38.969949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.036 [2024-12-12 10:40:38.969982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.036 qpair failed and we were unable to recover it. 00:27:05.036 [2024-12-12 10:40:38.970197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.036 [2024-12-12 10:40:38.970231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.036 qpair failed and we were unable to recover it. 00:27:05.036 [2024-12-12 10:40:38.970410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.036 [2024-12-12 10:40:38.970444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.036 qpair failed and we were unable to recover it. 00:27:05.036 [2024-12-12 10:40:38.970631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.036 [2024-12-12 10:40:38.970667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.036 qpair failed and we were unable to recover it. 00:27:05.036 [2024-12-12 10:40:38.970923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.036 [2024-12-12 10:40:38.970957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.036 qpair failed and we were unable to recover it. 
00:27:05.036 [2024-12-12 10:40:38.971164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.036 [2024-12-12 10:40:38.971198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.036 qpair failed and we were unable to recover it. 00:27:05.036 [2024-12-12 10:40:38.971325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.036 [2024-12-12 10:40:38.971360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.036 qpair failed and we were unable to recover it. 00:27:05.036 [2024-12-12 10:40:38.971638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.036 [2024-12-12 10:40:38.971675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.036 qpair failed and we were unable to recover it. 00:27:05.036 [2024-12-12 10:40:38.971833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.036 [2024-12-12 10:40:38.971868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.036 qpair failed and we were unable to recover it. 00:27:05.036 [2024-12-12 10:40:38.972072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.036 [2024-12-12 10:40:38.972106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.036 qpair failed and we were unable to recover it. 00:27:05.036 [2024-12-12 10:40:38.972403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.036 [2024-12-12 10:40:38.972437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.036 qpair failed and we were unable to recover it. 00:27:05.036 [2024-12-12 10:40:38.972681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.036 [2024-12-12 10:40:38.972717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.036 qpair failed and we were unable to recover it. 00:27:05.036 [2024-12-12 10:40:38.972930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.036 [2024-12-12 10:40:38.972965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.036 qpair failed and we were unable to recover it. 00:27:05.036 [2024-12-12 10:40:38.973256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.036 [2024-12-12 10:40:38.973291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.036 qpair failed and we were unable to recover it. 00:27:05.036 [2024-12-12 10:40:38.973625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.036 [2024-12-12 10:40:38.973660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.036 qpair failed and we were unable to recover it. 
00:27:05.036 [2024-12-12 10:40:38.973877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.036 [2024-12-12 10:40:38.973912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.036 qpair failed and we were unable to recover it. 00:27:05.036 [2024-12-12 10:40:38.974114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.036 [2024-12-12 10:40:38.974149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.036 qpair failed and we were unable to recover it. 00:27:05.036 [2024-12-12 10:40:38.974309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.036 [2024-12-12 10:40:38.974342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.036 qpair failed and we were unable to recover it. 00:27:05.036 [2024-12-12 10:40:38.974559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.036 [2024-12-12 10:40:38.974621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.036 qpair failed and we were unable to recover it. 00:27:05.036 [2024-12-12 10:40:38.974883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.036 [2024-12-12 10:40:38.974919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.036 qpair failed and we were unable to recover it. 00:27:05.036 [2024-12-12 10:40:38.975127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.036 [2024-12-12 10:40:38.975160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.036 qpair failed and we were unable to recover it. 00:27:05.036 [2024-12-12 10:40:38.975426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.037 [2024-12-12 10:40:38.975461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.037 qpair failed and we were unable to recover it. 00:27:05.037 [2024-12-12 10:40:38.975674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.037 [2024-12-12 10:40:38.975710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.037 qpair failed and we were unable to recover it. 00:27:05.037 [2024-12-12 10:40:38.975942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.037 [2024-12-12 10:40:38.975975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.037 qpair failed and we were unable to recover it. 00:27:05.037 [2024-12-12 10:40:38.976156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.037 [2024-12-12 10:40:38.976191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.037 qpair failed and we were unable to recover it. 
00:27:05.037 [2024-12-12 10:40:38.976385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.037 [2024-12-12 10:40:38.976425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.037 qpair failed and we were unable to recover it. 00:27:05.037 [2024-12-12 10:40:38.976617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.037 [2024-12-12 10:40:38.976652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.037 qpair failed and we were unable to recover it. 00:27:05.037 [2024-12-12 10:40:38.976949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.037 [2024-12-12 10:40:38.976984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.037 qpair failed and we were unable to recover it. 00:27:05.037 [2024-12-12 10:40:38.977131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.037 [2024-12-12 10:40:38.977166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.037 qpair failed and we were unable to recover it. 00:27:05.037 [2024-12-12 10:40:38.977374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.037 [2024-12-12 10:40:38.977408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.037 qpair failed and we were unable to recover it. 00:27:05.037 [2024-12-12 10:40:38.977630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.037 [2024-12-12 10:40:38.977667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.037 qpair failed and we were unable to recover it. 00:27:05.037 [2024-12-12 10:40:38.977874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.037 [2024-12-12 10:40:38.977907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.037 qpair failed and we were unable to recover it. 00:27:05.037 [2024-12-12 10:40:38.978052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.037 [2024-12-12 10:40:38.978086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.037 qpair failed and we were unable to recover it. 00:27:05.037 [2024-12-12 10:40:38.978361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.037 [2024-12-12 10:40:38.978395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.037 qpair failed and we were unable to recover it. 00:27:05.037 [2024-12-12 10:40:38.978653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.037 [2024-12-12 10:40:38.978689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.037 qpair failed and we were unable to recover it. 
00:27:05.037 [2024-12-12 10:40:38.978920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.037 [2024-12-12 10:40:38.978954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.037 qpair failed and we were unable to recover it. 00:27:05.037 [2024-12-12 10:40:38.979165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.037 [2024-12-12 10:40:38.979200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.037 qpair failed and we were unable to recover it. 00:27:05.037 [2024-12-12 10:40:38.979384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.037 [2024-12-12 10:40:38.979416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.037 qpair failed and we were unable to recover it. 00:27:05.037 [2024-12-12 10:40:38.979634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.037 [2024-12-12 10:40:38.979669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.037 qpair failed and we were unable to recover it. 00:27:05.037 [2024-12-12 10:40:38.979880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.037 [2024-12-12 10:40:38.979914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.037 qpair failed and we were unable to recover it. 00:27:05.037 [2024-12-12 10:40:38.980217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.037 [2024-12-12 10:40:38.980252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.037 qpair failed and we were unable to recover it. 00:27:05.037 [2024-12-12 10:40:38.980530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.037 [2024-12-12 10:40:38.980565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.037 qpair failed and we were unable to recover it. 00:27:05.037 [2024-12-12 10:40:38.980824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.037 [2024-12-12 10:40:38.980861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.037 qpair failed and we were unable to recover it. 00:27:05.037 [2024-12-12 10:40:38.981133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.037 [2024-12-12 10:40:38.981166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.037 qpair failed and we were unable to recover it. 00:27:05.037 [2024-12-12 10:40:38.981445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.037 [2024-12-12 10:40:38.981480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.037 qpair failed and we were unable to recover it. 
00:27:05.037 [2024-12-12 10:40:38.981752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.037 [2024-12-12 10:40:38.981789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.037 qpair failed and we were unable to recover it. 00:27:05.037 [2024-12-12 10:40:38.981942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.037 [2024-12-12 10:40:38.981976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.037 qpair failed and we were unable to recover it. 00:27:05.037 [2024-12-12 10:40:38.982256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.037 [2024-12-12 10:40:38.982290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.037 qpair failed and we were unable to recover it. 00:27:05.037 [2024-12-12 10:40:38.982546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.037 [2024-12-12 10:40:38.982589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.037 qpair failed and we were unable to recover it. 00:27:05.037 [2024-12-12 10:40:38.982859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.037 [2024-12-12 10:40:38.982893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.037 qpair failed and we were unable to recover it. 00:27:05.037 [2024-12-12 10:40:38.983079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.037 [2024-12-12 10:40:38.983113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.037 qpair failed and we were unable to recover it. 00:27:05.037 [2024-12-12 10:40:38.983317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.037 [2024-12-12 10:40:38.983352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.037 qpair failed and we were unable to recover it. 00:27:05.037 [2024-12-12 10:40:38.983547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.038 [2024-12-12 10:40:38.983592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.038 qpair failed and we were unable to recover it. 00:27:05.038 [2024-12-12 10:40:38.983781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.038 [2024-12-12 10:40:38.983816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.038 qpair failed and we were unable to recover it. 00:27:05.038 [2024-12-12 10:40:38.983960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.038 [2024-12-12 10:40:38.983993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.038 qpair failed and we were unable to recover it. 
00:27:05.038 [2024-12-12 10:40:38.984195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.038 [2024-12-12 10:40:38.984229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.038 qpair failed and we were unable to recover it. 00:27:05.038 [2024-12-12 10:40:38.984497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.038 [2024-12-12 10:40:38.984532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.038 qpair failed and we were unable to recover it. 00:27:05.038 [2024-12-12 10:40:38.984771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.038 [2024-12-12 10:40:38.984807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.038 qpair failed and we were unable to recover it. 00:27:05.038 [2024-12-12 10:40:38.985032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.038 [2024-12-12 10:40:38.985066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.038 qpair failed and we were unable to recover it. 00:27:05.038 [2024-12-12 10:40:38.985322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.038 [2024-12-12 10:40:38.985356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.038 qpair failed and we were unable to recover it. 00:27:05.038 [2024-12-12 10:40:38.985632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.038 [2024-12-12 10:40:38.985668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.038 qpair failed and we were unable to recover it. 00:27:05.038 [2024-12-12 10:40:38.985872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.038 [2024-12-12 10:40:38.985907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.038 qpair failed and we were unable to recover it. 00:27:05.038 [2024-12-12 10:40:38.986100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.038 [2024-12-12 10:40:38.986135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.038 qpair failed and we were unable to recover it. 00:27:05.038 [2024-12-12 10:40:38.986419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.038 [2024-12-12 10:40:38.986453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.038 qpair failed and we were unable to recover it. 00:27:05.038 [2024-12-12 10:40:38.986643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.038 [2024-12-12 10:40:38.986680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.038 qpair failed and we were unable to recover it. 
00:27:05.038 [2024-12-12 10:40:38.986817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.038 [2024-12-12 10:40:38.986858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:05.038 qpair failed and we were unable to recover it.
00:27:05.038 [... the same three-record failure triplet (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7fb838000b90 addr=10.0.0.2 port=4420, "qpair failed and we were unable to recover it.") repeats for every reconnect attempt, identical except for timestamps, from 10:40:38.987 through 10:40:39.038 ...]
00:27:05.322 [2024-12-12 10:40:39.038750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.322 [2024-12-12 10:40:39.038786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:05.322 qpair failed and we were unable to recover it.
00:27:05.322 [2024-12-12 10:40:39.038927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.322 [2024-12-12 10:40:39.038967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.322 qpair failed and we were unable to recover it. 00:27:05.322 [2024-12-12 10:40:39.039238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.322 [2024-12-12 10:40:39.039269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.322 qpair failed and we were unable to recover it. 00:27:05.322 [2024-12-12 10:40:39.039419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.322 [2024-12-12 10:40:39.039452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.322 qpair failed and we were unable to recover it. 00:27:05.322 [2024-12-12 10:40:39.039677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.322 [2024-12-12 10:40:39.039713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.322 qpair failed and we were unable to recover it. 00:27:05.322 [2024-12-12 10:40:39.039971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.322 [2024-12-12 10:40:39.040005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.322 qpair failed and we were unable to recover it. 00:27:05.322 [2024-12-12 10:40:39.040213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.322 [2024-12-12 10:40:39.040248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.322 qpair failed and we were unable to recover it. 00:27:05.322 [2024-12-12 10:40:39.040449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.322 [2024-12-12 10:40:39.040483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.322 qpair failed and we were unable to recover it. 00:27:05.322 [2024-12-12 10:40:39.040689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.322 [2024-12-12 10:40:39.040724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.322 qpair failed and we were unable to recover it. 00:27:05.322 [2024-12-12 10:40:39.040995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.322 [2024-12-12 10:40:39.041030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.322 qpair failed and we were unable to recover it. 00:27:05.322 [2024-12-12 10:40:39.041310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.322 [2024-12-12 10:40:39.041344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.322 qpair failed and we were unable to recover it. 
00:27:05.322 [2024-12-12 10:40:39.041549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.322 [2024-12-12 10:40:39.041592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.322 qpair failed and we were unable to recover it. 00:27:05.322 [2024-12-12 10:40:39.041793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.322 [2024-12-12 10:40:39.041827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.322 qpair failed and we were unable to recover it. 00:27:05.322 [2024-12-12 10:40:39.042022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-12-12 10:40:39.042057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-12-12 10:40:39.042339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-12-12 10:40:39.042374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-12-12 10:40:39.042509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-12-12 10:40:39.042541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-12-12 10:40:39.042706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-12-12 10:40:39.042741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-12-12 10:40:39.042956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-12-12 10:40:39.042991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-12-12 10:40:39.043246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-12-12 10:40:39.043286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-12-12 10:40:39.043551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-12-12 10:40:39.043597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-12-12 10:40:39.043924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-12-12 10:40:39.043959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 
00:27:05.323 [2024-12-12 10:40:39.044146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-12-12 10:40:39.044180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-12-12 10:40:39.044458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-12-12 10:40:39.044491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-12-12 10:40:39.044744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-12-12 10:40:39.044781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-12-12 10:40:39.044918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-12-12 10:40:39.044951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-12-12 10:40:39.045231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-12-12 10:40:39.045265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-12-12 10:40:39.045406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-12-12 10:40:39.045439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-12-12 10:40:39.045643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-12-12 10:40:39.045678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-12-12 10:40:39.045879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-12-12 10:40:39.045913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-12-12 10:40:39.046212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-12-12 10:40:39.046246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-12-12 10:40:39.046447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-12-12 10:40:39.046481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 
00:27:05.323 [2024-12-12 10:40:39.046708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-12-12 10:40:39.046744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-12-12 10:40:39.046980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-12-12 10:40:39.047015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-12-12 10:40:39.047281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-12-12 10:40:39.047316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-12-12 10:40:39.047614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-12-12 10:40:39.047650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-12-12 10:40:39.047913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-12-12 10:40:39.047948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-12-12 10:40:39.048157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-12-12 10:40:39.048191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-12-12 10:40:39.048461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-12-12 10:40:39.048495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-12-12 10:40:39.048809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-12-12 10:40:39.048845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-12-12 10:40:39.049144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-12-12 10:40:39.049180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-12-12 10:40:39.049464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-12-12 10:40:39.049499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 
00:27:05.323 [2024-12-12 10:40:39.049685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-12-12 10:40:39.049720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-12-12 10:40:39.049916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-12-12 10:40:39.049951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-12-12 10:40:39.050254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-12-12 10:40:39.050288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-12-12 10:40:39.050541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-12-12 10:40:39.050582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-12-12 10:40:39.050789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-12-12 10:40:39.050824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-12-12 10:40:39.050977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-12-12 10:40:39.051011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-12-12 10:40:39.051299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-12-12 10:40:39.051333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-12-12 10:40:39.051459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-12-12 10:40:39.051495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-12-12 10:40:39.051732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-12-12 10:40:39.051766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-12-12 10:40:39.052040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-12-12 10:40:39.052074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 
00:27:05.324 [2024-12-12 10:40:39.052283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-12-12 10:40:39.052317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-12-12 10:40:39.052535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-12-12 10:40:39.052580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-12-12 10:40:39.052795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-12-12 10:40:39.052829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-12-12 10:40:39.053042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-12-12 10:40:39.053075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-12-12 10:40:39.053296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-12-12 10:40:39.053332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-12-12 10:40:39.053620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-12-12 10:40:39.053655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-12-12 10:40:39.053929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-12-12 10:40:39.053963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-12-12 10:40:39.054224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-12-12 10:40:39.054265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-12-12 10:40:39.054488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-12-12 10:40:39.054522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-12-12 10:40:39.054743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-12-12 10:40:39.054779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 
00:27:05.324 [2024-12-12 10:40:39.055006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-12-12 10:40:39.055040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-12-12 10:40:39.055300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-12-12 10:40:39.055334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-12-12 10:40:39.055537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-12-12 10:40:39.055581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-12-12 10:40:39.055778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-12-12 10:40:39.055813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-12-12 10:40:39.056022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-12-12 10:40:39.056057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-12-12 10:40:39.056250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-12-12 10:40:39.056284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-12-12 10:40:39.056483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-12-12 10:40:39.056517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-12-12 10:40:39.056795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-12-12 10:40:39.056831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-12-12 10:40:39.056977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-12-12 10:40:39.057012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-12-12 10:40:39.057336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-12-12 10:40:39.057372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 
00:27:05.324 [2024-12-12 10:40:39.057582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-12-12 10:40:39.057617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-12-12 10:40:39.057786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-12-12 10:40:39.057821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-12-12 10:40:39.058049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-12-12 10:40:39.058084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-12-12 10:40:39.058240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-12-12 10:40:39.058274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-12-12 10:40:39.058501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-12-12 10:40:39.058536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-12-12 10:40:39.058749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-12-12 10:40:39.058785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-12-12 10:40:39.058931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-12-12 10:40:39.058965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-12-12 10:40:39.059161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-12-12 10:40:39.059196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-12-12 10:40:39.059516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-12-12 10:40:39.059549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-12-12 10:40:39.059764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-12-12 10:40:39.059799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 
00:27:05.324 [2024-12-12 10:40:39.060025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-12-12 10:40:39.060058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-12-12 10:40:39.060337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-12-12 10:40:39.060370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-12-12 10:40:39.060550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-12-12 10:40:39.060595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-12-12 10:40:39.060794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-12-12 10:40:39.060828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-12-12 10:40:39.060987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-12-12 10:40:39.061022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-12-12 10:40:39.061251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-12-12 10:40:39.061285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-12-12 10:40:39.061502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-12-12 10:40:39.061537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-12-12 10:40:39.061834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-12-12 10:40:39.061869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-12-12 10:40:39.062066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-12-12 10:40:39.062100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-12-12 10:40:39.062382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-12-12 10:40:39.062417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 
00:27:05.325 [2024-12-12 10:40:39.062642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-12-12 10:40:39.062678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-12-12 10:40:39.062908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-12-12 10:40:39.062942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-12-12 10:40:39.063137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-12-12 10:40:39.063171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-12-12 10:40:39.063370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-12-12 10:40:39.063404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-12-12 10:40:39.063663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-12-12 10:40:39.063699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-12-12 10:40:39.063907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-12-12 10:40:39.063942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-12-12 10:40:39.064236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-12-12 10:40:39.064270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-12-12 10:40:39.064468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-12-12 10:40:39.064507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-12-12 10:40:39.064809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-12-12 10:40:39.064845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-12-12 10:40:39.065051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-12-12 10:40:39.065085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 
00:27:05.325 [2024-12-12 10:40:39.065219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-12-12 10:40:39.065254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-12-12 10:40:39.065544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-12-12 10:40:39.065587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-12-12 10:40:39.065798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-12-12 10:40:39.065832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-12-12 10:40:39.066087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-12-12 10:40:39.066121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-12-12 10:40:39.066240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-12-12 10:40:39.066272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-12-12 10:40:39.066565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-12-12 10:40:39.066610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-12-12 10:40:39.066766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-12-12 10:40:39.066801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-12-12 10:40:39.067054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-12-12 10:40:39.067089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-12-12 10:40:39.067360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-12-12 10:40:39.067395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-12-12 10:40:39.067602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-12-12 10:40:39.067638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 
00:27:05.325 [2024-12-12 10:40:39.067836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-12-12 10:40:39.067871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-12-12 10:40:39.068094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-12-12 10:40:39.068129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-12-12 10:40:39.068316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-12-12 10:40:39.068350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-12-12 10:40:39.068605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-12-12 10:40:39.068640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-12-12 10:40:39.068921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-12-12 10:40:39.068955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-12-12 10:40:39.069163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-12-12 10:40:39.069198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-12-12 10:40:39.069322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-12-12 10:40:39.069353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-12-12 10:40:39.069629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-12-12 10:40:39.069664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-12-12 10:40:39.069885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-12-12 10:40:39.069920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-12-12 10:40:39.070060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-12-12 10:40:39.070093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 
00:27:05.325 [2024-12-12 10:40:39.070313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-12-12 10:40:39.070349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-12-12 10:40:39.070645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-12-12 10:40:39.070681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-12-12 10:40:39.070821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.326 [2024-12-12 10:40:39.070854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.326 qpair failed and we were unable to recover it. 00:27:05.326 [2024-12-12 10:40:39.071072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.326 [2024-12-12 10:40:39.071106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.326 qpair failed and we were unable to recover it. 00:27:05.326 [2024-12-12 10:40:39.071398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.326 [2024-12-12 10:40:39.071432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.326 qpair failed and we were unable to recover it. 00:27:05.326 [2024-12-12 10:40:39.071656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.326 [2024-12-12 10:40:39.071692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.326 qpair failed and we were unable to recover it. 00:27:05.326 [2024-12-12 10:40:39.071947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.326 [2024-12-12 10:40:39.071981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.326 qpair failed and we were unable to recover it. 00:27:05.326 [2024-12-12 10:40:39.072283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.326 [2024-12-12 10:40:39.072317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.326 qpair failed and we were unable to recover it. 00:27:05.326 [2024-12-12 10:40:39.072519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.326 [2024-12-12 10:40:39.072554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.326 qpair failed and we were unable to recover it. 00:27:05.326 [2024-12-12 10:40:39.072752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.326 [2024-12-12 10:40:39.072786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.326 qpair failed and we were unable to recover it. 
00:27:05.326 [2024-12-12 10:40:39.072990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.326 [2024-12-12 10:40:39.073025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:05.326 qpair failed and we were unable to recover it.
00:27:05.326 [... the same three-line connect()/qpair error repeats without interruption from 10:40:39.073150 through 10:40:39.126196 (elapsed 00:27:05.326-00:27:05.331): every reconnect attempt on tqpair=0x7fb838000b90 to 10.0.0.2 port 4420 fails with errno = 111, and each time the qpair fails and cannot be recovered ...]
00:27:05.331 [2024-12-12 10:40:39.126392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.331 [2024-12-12 10:40:39.126427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.331 qpair failed and we were unable to recover it. 00:27:05.331 [2024-12-12 10:40:39.126557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.331 [2024-12-12 10:40:39.126613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.331 qpair failed and we were unable to recover it. 00:27:05.331 [2024-12-12 10:40:39.126887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.331 [2024-12-12 10:40:39.126921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.331 qpair failed and we were unable to recover it. 00:27:05.331 [2024-12-12 10:40:39.127172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.331 [2024-12-12 10:40:39.127206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.331 qpair failed and we were unable to recover it. 00:27:05.331 [2024-12-12 10:40:39.127525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.331 [2024-12-12 10:40:39.127559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.331 qpair failed and we were unable to recover it. 00:27:05.331 [2024-12-12 10:40:39.127851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.331 [2024-12-12 10:40:39.127888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.331 qpair failed and we were unable to recover it. 00:27:05.331 [2024-12-12 10:40:39.128034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.331 [2024-12-12 10:40:39.128069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.331 qpair failed and we were unable to recover it. 00:27:05.331 [2024-12-12 10:40:39.128268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.331 [2024-12-12 10:40:39.128302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.331 qpair failed and we were unable to recover it. 00:27:05.331 [2024-12-12 10:40:39.128581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.331 [2024-12-12 10:40:39.128617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.331 qpair failed and we were unable to recover it. 00:27:05.331 [2024-12-12 10:40:39.128829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.331 [2024-12-12 10:40:39.128863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.331 qpair failed and we were unable to recover it. 
00:27:05.331 [2024-12-12 10:40:39.128991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.331 [2024-12-12 10:40:39.129025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.331 qpair failed and we were unable to recover it. 00:27:05.331 [2024-12-12 10:40:39.129313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.331 [2024-12-12 10:40:39.129349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-12-12 10:40:39.129559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-12-12 10:40:39.129604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-12-12 10:40:39.129742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-12-12 10:40:39.129776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-12-12 10:40:39.129986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-12-12 10:40:39.130020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-12-12 10:40:39.130304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-12-12 10:40:39.130339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-12-12 10:40:39.130532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-12-12 10:40:39.130567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-12-12 10:40:39.130786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-12-12 10:40:39.130820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-12-12 10:40:39.130963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-12-12 10:40:39.130998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-12-12 10:40:39.131253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-12-12 10:40:39.131288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 
00:27:05.332 [2024-12-12 10:40:39.131433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-12-12 10:40:39.131468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-12-12 10:40:39.131678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-12-12 10:40:39.131714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-12-12 10:40:39.132019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-12-12 10:40:39.132053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-12-12 10:40:39.132269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-12-12 10:40:39.132303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-12-12 10:40:39.132580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-12-12 10:40:39.132616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-12-12 10:40:39.132868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-12-12 10:40:39.132902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-12-12 10:40:39.133049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-12-12 10:40:39.133084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-12-12 10:40:39.133342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-12-12 10:40:39.133377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-12-12 10:40:39.133648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-12-12 10:40:39.133683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-12-12 10:40:39.133942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-12-12 10:40:39.133977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 
00:27:05.332 [2024-12-12 10:40:39.134239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-12-12 10:40:39.134273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-12-12 10:40:39.134562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-12-12 10:40:39.134617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-12-12 10:40:39.134787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-12-12 10:40:39.134821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-12-12 10:40:39.135052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-12-12 10:40:39.135085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-12-12 10:40:39.135222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-12-12 10:40:39.135255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-12-12 10:40:39.135529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-12-12 10:40:39.135563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-12-12 10:40:39.135855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-12-12 10:40:39.135890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-12-12 10:40:39.136113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-12-12 10:40:39.136153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-12-12 10:40:39.136455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-12-12 10:40:39.136489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-12-12 10:40:39.136682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-12-12 10:40:39.136718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 
00:27:05.332 [2024-12-12 10:40:39.136945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-12-12 10:40:39.136981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-12-12 10:40:39.137236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-12-12 10:40:39.137271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-12-12 10:40:39.137468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-12-12 10:40:39.137502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-12-12 10:40:39.137733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-12-12 10:40:39.137769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-12-12 10:40:39.137975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-12-12 10:40:39.138011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-12-12 10:40:39.138219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-12-12 10:40:39.138253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-12-12 10:40:39.138554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-12-12 10:40:39.138597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-12-12 10:40:39.138791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-12-12 10:40:39.138827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-12-12 10:40:39.139011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-12-12 10:40:39.139045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-12-12 10:40:39.139264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-12-12 10:40:39.139298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 
00:27:05.333 [2024-12-12 10:40:39.139556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-12-12 10:40:39.139600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-12-12 10:40:39.139863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-12-12 10:40:39.139898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-12-12 10:40:39.140022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-12-12 10:40:39.140057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-12-12 10:40:39.140300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-12-12 10:40:39.140335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-12-12 10:40:39.140591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-12-12 10:40:39.140628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-12-12 10:40:39.140812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-12-12 10:40:39.140847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-12-12 10:40:39.141076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-12-12 10:40:39.141110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-12-12 10:40:39.141385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-12-12 10:40:39.141420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-12-12 10:40:39.141642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-12-12 10:40:39.141679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-12-12 10:40:39.141868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-12-12 10:40:39.141903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 
00:27:05.333 [2024-12-12 10:40:39.142157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-12-12 10:40:39.142192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-12-12 10:40:39.142332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-12-12 10:40:39.142366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-12-12 10:40:39.142625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-12-12 10:40:39.142661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-12-12 10:40:39.142870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-12-12 10:40:39.142903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-12-12 10:40:39.143163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-12-12 10:40:39.143199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-12-12 10:40:39.143484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-12-12 10:40:39.143520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-12-12 10:40:39.143734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-12-12 10:40:39.143770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-12-12 10:40:39.143920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-12-12 10:40:39.143955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-12-12 10:40:39.144079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-12-12 10:40:39.144113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-12-12 10:40:39.144441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-12-12 10:40:39.144476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 
00:27:05.333 [2024-12-12 10:40:39.144712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-12-12 10:40:39.144747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-12-12 10:40:39.144954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-12-12 10:40:39.144989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-12-12 10:40:39.145209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-12-12 10:40:39.145244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-12-12 10:40:39.145427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-12-12 10:40:39.145461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-12-12 10:40:39.145665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-12-12 10:40:39.145700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-12-12 10:40:39.145899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-12-12 10:40:39.145933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-12-12 10:40:39.146193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-12-12 10:40:39.146228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-12-12 10:40:39.146369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-12-12 10:40:39.146403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-12-12 10:40:39.146640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-12-12 10:40:39.146676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-12-12 10:40:39.146875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-12-12 10:40:39.146910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 
00:27:05.333 [2024-12-12 10:40:39.147192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-12-12 10:40:39.147225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-12-12 10:40:39.147477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-12-12 10:40:39.147513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-12-12 10:40:39.147723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-12-12 10:40:39.147760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-12-12 10:40:39.148023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-12-12 10:40:39.148056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-12-12 10:40:39.148205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-12-12 10:40:39.148240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.334 qpair failed and we were unable to recover it. 00:27:05.334 [2024-12-12 10:40:39.148429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.334 [2024-12-12 10:40:39.148463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.334 qpair failed and we were unable to recover it. 00:27:05.334 [2024-12-12 10:40:39.148665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.334 [2024-12-12 10:40:39.148701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.334 qpair failed and we were unable to recover it. 00:27:05.334 [2024-12-12 10:40:39.148853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.334 [2024-12-12 10:40:39.148889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.334 qpair failed and we were unable to recover it. 00:27:05.334 [2024-12-12 10:40:39.149088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.334 [2024-12-12 10:40:39.149123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.334 qpair failed and we were unable to recover it. 00:27:05.334 [2024-12-12 10:40:39.149402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.334 [2024-12-12 10:40:39.149437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.334 qpair failed and we were unable to recover it. 
00:27:05.334 [2024-12-12 10:40:39.149642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.334 [2024-12-12 10:40:39.149678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.334 qpair failed and we were unable to recover it. 00:27:05.334 [2024-12-12 10:40:39.149845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.334 [2024-12-12 10:40:39.149880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.334 qpair failed and we were unable to recover it. 00:27:05.334 [2024-12-12 10:40:39.150085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.334 [2024-12-12 10:40:39.150120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.334 qpair failed and we were unable to recover it. 00:27:05.334 [2024-12-12 10:40:39.150430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.334 [2024-12-12 10:40:39.150465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.334 qpair failed and we were unable to recover it. 00:27:05.334 [2024-12-12 10:40:39.150750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.334 [2024-12-12 10:40:39.150786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.334 qpair failed and we were unable to recover it. 00:27:05.334 [2024-12-12 10:40:39.150932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.334 [2024-12-12 10:40:39.150966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.334 qpair failed and we were unable to recover it. 00:27:05.334 [2024-12-12 10:40:39.151173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.334 [2024-12-12 10:40:39.151207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.334 qpair failed and we were unable to recover it. 00:27:05.334 [2024-12-12 10:40:39.151415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.334 [2024-12-12 10:40:39.151450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.334 qpair failed and we were unable to recover it. 00:27:05.334 [2024-12-12 10:40:39.151667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.334 [2024-12-12 10:40:39.151703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.334 qpair failed and we were unable to recover it. 00:27:05.334 [2024-12-12 10:40:39.151918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.334 [2024-12-12 10:40:39.151954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.334 qpair failed and we were unable to recover it. 
00:27:05.334 [2024-12-12 10:40:39.152209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.334 [2024-12-12 10:40:39.152244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.334 qpair failed and we were unable to recover it. 00:27:05.334 [2024-12-12 10:40:39.152549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.334 [2024-12-12 10:40:39.152592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.334 qpair failed and we were unable to recover it. 00:27:05.334 [2024-12-12 10:40:39.152727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.334 [2024-12-12 10:40:39.152761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.334 qpair failed and we were unable to recover it. 00:27:05.334 [2024-12-12 10:40:39.152912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.334 [2024-12-12 10:40:39.152946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.334 qpair failed and we were unable to recover it. 00:27:05.334 [2024-12-12 10:40:39.153150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.334 [2024-12-12 10:40:39.153190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.334 qpair failed and we were unable to recover it. 00:27:05.334 [2024-12-12 10:40:39.153444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.334 [2024-12-12 10:40:39.153478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.334 qpair failed and we were unable to recover it. 00:27:05.334 [2024-12-12 10:40:39.153693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.334 [2024-12-12 10:40:39.153729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.334 qpair failed and we were unable to recover it. 00:27:05.334 [2024-12-12 10:40:39.153989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.334 [2024-12-12 10:40:39.154024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.334 qpair failed and we were unable to recover it. 00:27:05.334 [2024-12-12 10:40:39.154302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.334 [2024-12-12 10:40:39.154336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.334 qpair failed and we were unable to recover it. 00:27:05.334 [2024-12-12 10:40:39.154538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.334 [2024-12-12 10:40:39.154583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.334 qpair failed and we were unable to recover it. 
00:27:05.334 [2024-12-12 10:40:39.154843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.334 [2024-12-12 10:40:39.154878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.334 qpair failed and we were unable to recover it. 00:27:05.334 [2024-12-12 10:40:39.155083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.334 [2024-12-12 10:40:39.155117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.334 qpair failed and we were unable to recover it. 00:27:05.334 [2024-12-12 10:40:39.155321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.334 [2024-12-12 10:40:39.155356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.334 qpair failed and we were unable to recover it. 00:27:05.334 [2024-12-12 10:40:39.155653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.334 [2024-12-12 10:40:39.155689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.334 qpair failed and we were unable to recover it. 00:27:05.334 [2024-12-12 10:40:39.155947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.334 [2024-12-12 10:40:39.155981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.334 qpair failed and we were unable to recover it. 00:27:05.334 [2024-12-12 10:40:39.156248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.334 [2024-12-12 10:40:39.156282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.334 qpair failed and we were unable to recover it. 00:27:05.334 [2024-12-12 10:40:39.156536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.334 [2024-12-12 10:40:39.156579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.334 qpair failed and we were unable to recover it. 00:27:05.334 [2024-12-12 10:40:39.156741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.334 [2024-12-12 10:40:39.156775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.334 qpair failed and we were unable to recover it. 00:27:05.334 [2024-12-12 10:40:39.157003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.334 [2024-12-12 10:40:39.157037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.334 qpair failed and we were unable to recover it. 00:27:05.334 [2024-12-12 10:40:39.157352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.334 [2024-12-12 10:40:39.157387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.334 qpair failed and we were unable to recover it. 
00:27:05.334 [2024-12-12 10:40:39.157640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.334 [2024-12-12 10:40:39.157677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.334 qpair failed and we were unable to recover it. 00:27:05.334 [2024-12-12 10:40:39.157815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.334 [2024-12-12 10:40:39.157850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.334 qpair failed and we were unable to recover it. 00:27:05.334 [2024-12-12 10:40:39.157979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.334 [2024-12-12 10:40:39.158013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.334 qpair failed and we were unable to recover it. 00:27:05.334 [2024-12-12 10:40:39.158159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.334 [2024-12-12 10:40:39.158194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.334 qpair failed and we were unable to recover it. 00:27:05.335 [2024-12-12 10:40:39.158351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.335 [2024-12-12 10:40:39.158385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.335 qpair failed and we were unable to recover it. 00:27:05.335 [2024-12-12 10:40:39.158630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.335 [2024-12-12 10:40:39.158667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.335 qpair failed and we were unable to recover it. 00:27:05.335 [2024-12-12 10:40:39.158874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.335 [2024-12-12 10:40:39.158909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.335 qpair failed and we were unable to recover it. 00:27:05.335 [2024-12-12 10:40:39.159112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.335 [2024-12-12 10:40:39.159147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.335 qpair failed and we were unable to recover it. 00:27:05.335 [2024-12-12 10:40:39.159278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.335 [2024-12-12 10:40:39.159312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.335 qpair failed and we were unable to recover it. 00:27:05.335 [2024-12-12 10:40:39.159494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.335 [2024-12-12 10:40:39.159527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.335 qpair failed and we were unable to recover it. 
00:27:05.335 [2024-12-12 10:40:39.159668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.335 [2024-12-12 10:40:39.159704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:05.335 qpair failed and we were unable to recover it.
00:27:05.335 [... the same record group repeats for every retry from 10:40:39.159969 through 10:40:39.211234: posix_sock_create connect() failed with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reported a sock connection error for tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420, and each attempt ended with "qpair failed and we were unable to recover it." ...]
00:27:05.340 [2024-12-12 10:40:39.211430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.340 [2024-12-12 10:40:39.211465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.340 qpair failed and we were unable to recover it. 00:27:05.340 [2024-12-12 10:40:39.211610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.340 [2024-12-12 10:40:39.211645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.340 qpair failed and we were unable to recover it. 00:27:05.340 [2024-12-12 10:40:39.211794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.340 [2024-12-12 10:40:39.211828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.340 qpair failed and we were unable to recover it. 00:27:05.340 [2024-12-12 10:40:39.211966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.340 [2024-12-12 10:40:39.212000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.340 qpair failed and we were unable to recover it. 00:27:05.340 [2024-12-12 10:40:39.212199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.340 [2024-12-12 10:40:39.212240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.340 qpair failed and we were unable to recover it. 00:27:05.340 [2024-12-12 10:40:39.212472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.340 [2024-12-12 10:40:39.212506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.340 qpair failed and we were unable to recover it. 00:27:05.340 [2024-12-12 10:40:39.212804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.340 [2024-12-12 10:40:39.212839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.340 qpair failed and we were unable to recover it. 00:27:05.340 [2024-12-12 10:40:39.212974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.340 [2024-12-12 10:40:39.213009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.340 qpair failed and we were unable to recover it. 00:27:05.340 [2024-12-12 10:40:39.213218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.340 [2024-12-12 10:40:39.213252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.340 qpair failed and we were unable to recover it. 00:27:05.340 [2024-12-12 10:40:39.213450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-12-12 10:40:39.213484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 
00:27:05.341 [2024-12-12 10:40:39.213697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-12-12 10:40:39.213733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-12-12 10:40:39.213922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-12-12 10:40:39.213956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-12-12 10:40:39.214160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-12-12 10:40:39.214194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-12-12 10:40:39.214434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-12-12 10:40:39.214468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-12-12 10:40:39.214726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-12-12 10:40:39.214762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-12-12 10:40:39.214903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-12-12 10:40:39.214938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-12-12 10:40:39.215075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-12-12 10:40:39.215109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-12-12 10:40:39.215384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-12-12 10:40:39.215419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-12-12 10:40:39.215685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-12-12 10:40:39.215722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-12-12 10:40:39.215934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-12-12 10:40:39.215968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 
00:27:05.341 [2024-12-12 10:40:39.216175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-12-12 10:40:39.216209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-12-12 10:40:39.216404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-12-12 10:40:39.216439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-12-12 10:40:39.216641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-12-12 10:40:39.216677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-12-12 10:40:39.216872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-12-12 10:40:39.216907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-12-12 10:40:39.217091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-12-12 10:40:39.217126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-12-12 10:40:39.217430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-12-12 10:40:39.217465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-12-12 10:40:39.217696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-12-12 10:40:39.217732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-12-12 10:40:39.218003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-12-12 10:40:39.218037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-12-12 10:40:39.218173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-12-12 10:40:39.218208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-12-12 10:40:39.218363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-12-12 10:40:39.218398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 
00:27:05.341 [2024-12-12 10:40:39.218588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-12-12 10:40:39.218624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-12-12 10:40:39.218768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-12-12 10:40:39.218804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-12-12 10:40:39.219024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-12-12 10:40:39.219059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-12-12 10:40:39.219185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-12-12 10:40:39.219219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-12-12 10:40:39.219528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-12-12 10:40:39.219562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-12-12 10:40:39.219722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-12-12 10:40:39.219757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-12-12 10:40:39.220035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-12-12 10:40:39.220070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-12-12 10:40:39.220327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-12-12 10:40:39.220362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-12-12 10:40:39.220590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-12-12 10:40:39.220629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-12-12 10:40:39.220783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-12-12 10:40:39.220818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 
00:27:05.341 [2024-12-12 10:40:39.221040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-12-12 10:40:39.221075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-12-12 10:40:39.221395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-12-12 10:40:39.221430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-12-12 10:40:39.221580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-12-12 10:40:39.221615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-12-12 10:40:39.221870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-12-12 10:40:39.221905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-12-12 10:40:39.222164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-12-12 10:40:39.222204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-12-12 10:40:39.222396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-12-12 10:40:39.222430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-12-12 10:40:39.222704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-12-12 10:40:39.222742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-12-12 10:40:39.222947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-12-12 10:40:39.222982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-12-12 10:40:39.223265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-12-12 10:40:39.223300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-12-12 10:40:39.223516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-12-12 10:40:39.223550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 
00:27:05.342 [2024-12-12 10:40:39.223854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-12-12 10:40:39.223889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-12-12 10:40:39.224096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-12-12 10:40:39.224131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-12-12 10:40:39.224319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-12-12 10:40:39.224354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-12-12 10:40:39.224628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-12-12 10:40:39.224663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-12-12 10:40:39.224802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-12-12 10:40:39.224837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-12-12 10:40:39.225052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-12-12 10:40:39.225087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-12-12 10:40:39.225407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-12-12 10:40:39.225443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-12-12 10:40:39.225677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-12-12 10:40:39.225712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-12-12 10:40:39.226023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-12-12 10:40:39.226058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-12-12 10:40:39.226335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-12-12 10:40:39.226369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 
00:27:05.342 [2024-12-12 10:40:39.226583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-12-12 10:40:39.226619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-12-12 10:40:39.226910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-12-12 10:40:39.226946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-12-12 10:40:39.227209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-12-12 10:40:39.227244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-12-12 10:40:39.227536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-12-12 10:40:39.227581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-12-12 10:40:39.227852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-12-12 10:40:39.227886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-12-12 10:40:39.228091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-12-12 10:40:39.228127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-12-12 10:40:39.228399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-12-12 10:40:39.228434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-12-12 10:40:39.228692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-12-12 10:40:39.228729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-12-12 10:40:39.228985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-12-12 10:40:39.229020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-12-12 10:40:39.229336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-12-12 10:40:39.229371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 
00:27:05.342 [2024-12-12 10:40:39.229665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-12-12 10:40:39.229701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-12-12 10:40:39.229967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-12-12 10:40:39.230002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-12-12 10:40:39.230308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-12-12 10:40:39.230342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-12-12 10:40:39.230540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-12-12 10:40:39.230584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-12-12 10:40:39.230738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-12-12 10:40:39.230773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-12-12 10:40:39.230969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-12-12 10:40:39.231004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-12-12 10:40:39.231215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-12-12 10:40:39.231250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-12-12 10:40:39.231503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-12-12 10:40:39.231538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-12-12 10:40:39.231734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-12-12 10:40:39.231770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-12-12 10:40:39.231957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-12-12 10:40:39.231992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 
00:27:05.342 [2024-12-12 10:40:39.232245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-12-12 10:40:39.232280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-12-12 10:40:39.232531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-12-12 10:40:39.232567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-12-12 10:40:39.232817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-12-12 10:40:39.232852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-12-12 10:40:39.233036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-12-12 10:40:39.233070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-12-12 10:40:39.233361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.343 [2024-12-12 10:40:39.233401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.343 qpair failed and we were unable to recover it. 00:27:05.343 [2024-12-12 10:40:39.233725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.343 [2024-12-12 10:40:39.233762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.343 qpair failed and we were unable to recover it. 00:27:05.343 [2024-12-12 10:40:39.233971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.343 [2024-12-12 10:40:39.234006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.343 qpair failed and we were unable to recover it. 00:27:05.343 [2024-12-12 10:40:39.234355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.343 [2024-12-12 10:40:39.234389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.343 qpair failed and we were unable to recover it. 00:27:05.343 [2024-12-12 10:40:39.234606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.343 [2024-12-12 10:40:39.234643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.343 qpair failed and we were unable to recover it. 00:27:05.343 [2024-12-12 10:40:39.234881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.343 [2024-12-12 10:40:39.234916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.343 qpair failed and we were unable to recover it. 
00:27:05.343 [2024-12-12 10:40:39.235115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.343 [2024-12-12 10:40:39.235149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.343 qpair failed and we were unable to recover it. 00:27:05.343 [2024-12-12 10:40:39.235404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.343 [2024-12-12 10:40:39.235439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.343 qpair failed and we were unable to recover it. 00:27:05.343 [2024-12-12 10:40:39.235723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.343 [2024-12-12 10:40:39.235759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.343 qpair failed and we were unable to recover it. 00:27:05.343 [2024-12-12 10:40:39.235959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.343 [2024-12-12 10:40:39.235993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.343 qpair failed and we were unable to recover it. 00:27:05.343 [2024-12-12 10:40:39.236273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.343 [2024-12-12 10:40:39.236307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.343 qpair failed and we were unable to recover it. 00:27:05.343 [2024-12-12 10:40:39.236534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.343 [2024-12-12 10:40:39.236578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.343 qpair failed and we were unable to recover it. 00:27:05.343 [2024-12-12 10:40:39.236787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.343 [2024-12-12 10:40:39.236822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.343 qpair failed and we were unable to recover it. 00:27:05.343 [2024-12-12 10:40:39.237016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.343 [2024-12-12 10:40:39.237050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.343 qpair failed and we were unable to recover it. 00:27:05.343 [2024-12-12 10:40:39.237258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.343 [2024-12-12 10:40:39.237293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.343 qpair failed and we were unable to recover it. 00:27:05.343 [2024-12-12 10:40:39.237499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.343 [2024-12-12 10:40:39.237534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.343 qpair failed and we were unable to recover it. 
00:27:05.343 [2024-12-12 10:40:39.237684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.343 [2024-12-12 10:40:39.237734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.343 qpair failed and we were unable to recover it. 00:27:05.343 [2024-12-12 10:40:39.237924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.343 [2024-12-12 10:40:39.237959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.343 qpair failed and we were unable to recover it. 00:27:05.343 [2024-12-12 10:40:39.238160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.343 [2024-12-12 10:40:39.238196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.343 qpair failed and we were unable to recover it. 00:27:05.343 [2024-12-12 10:40:39.238409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.343 [2024-12-12 10:40:39.238444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.343 qpair failed and we were unable to recover it. 00:27:05.343 [2024-12-12 10:40:39.238647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.343 [2024-12-12 10:40:39.238683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.343 qpair failed and we were unable to recover it. 00:27:05.343 [2024-12-12 10:40:39.238890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.343 [2024-12-12 10:40:39.238925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.343 qpair failed and we were unable to recover it. 00:27:05.343 [2024-12-12 10:40:39.239209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.343 [2024-12-12 10:40:39.239244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.343 qpair failed and we were unable to recover it. 00:27:05.343 [2024-12-12 10:40:39.239520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.343 [2024-12-12 10:40:39.239554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.343 qpair failed and we were unable to recover it. 00:27:05.343 [2024-12-12 10:40:39.239703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.343 [2024-12-12 10:40:39.239738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.343 qpair failed and we were unable to recover it. 00:27:05.343 [2024-12-12 10:40:39.239860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.343 [2024-12-12 10:40:39.239894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.343 qpair failed and we were unable to recover it. 
00:27:05.343 [2024-12-12 10:40:39.240160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.343 [2024-12-12 10:40:39.240195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.343 qpair failed and we were unable to recover it. 00:27:05.343 [2024-12-12 10:40:39.240393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.343 [2024-12-12 10:40:39.240428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.343 qpair failed and we were unable to recover it. 00:27:05.343 [2024-12-12 10:40:39.240697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.343 [2024-12-12 10:40:39.240733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.343 qpair failed and we were unable to recover it. 00:27:05.343 [2024-12-12 10:40:39.241019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.343 [2024-12-12 10:40:39.241054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.343 qpair failed and we were unable to recover it. 00:27:05.343 [2024-12-12 10:40:39.241331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.343 [2024-12-12 10:40:39.241366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.343 qpair failed and we were unable to recover it. 00:27:05.343 [2024-12-12 10:40:39.241562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.343 [2024-12-12 10:40:39.241606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.343 qpair failed and we were unable to recover it. 00:27:05.343 [2024-12-12 10:40:39.241758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.343 [2024-12-12 10:40:39.241793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.343 qpair failed and we were unable to recover it. 00:27:05.343 [2024-12-12 10:40:39.242050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.343 [2024-12-12 10:40:39.242085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.343 qpair failed and we were unable to recover it. 00:27:05.343 [2024-12-12 10:40:39.242209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.343 [2024-12-12 10:40:39.242244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.343 qpair failed and we were unable to recover it. 00:27:05.343 [2024-12-12 10:40:39.242516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.343 [2024-12-12 10:40:39.242552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.343 qpair failed and we were unable to recover it. 
00:27:05.343 [2024-12-12 10:40:39.242716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.343 [2024-12-12 10:40:39.242752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.343 qpair failed and we were unable to recover it. 00:27:05.343 [2024-12-12 10:40:39.242897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.343 [2024-12-12 10:40:39.242931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.344 qpair failed and we were unable to recover it. 00:27:05.344 [2024-12-12 10:40:39.243079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.344 [2024-12-12 10:40:39.243114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.344 qpair failed and we were unable to recover it. 00:27:05.344 [2024-12-12 10:40:39.243384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.344 [2024-12-12 10:40:39.243421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.344 qpair failed and we were unable to recover it. 00:27:05.344 [2024-12-12 10:40:39.243621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.344 [2024-12-12 10:40:39.243663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.344 qpair failed and we were unable to recover it. 00:27:05.344 [2024-12-12 10:40:39.243866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.344 [2024-12-12 10:40:39.243902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.344 qpair failed and we were unable to recover it. 00:27:05.344 [2024-12-12 10:40:39.244046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.344 [2024-12-12 10:40:39.244081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.344 qpair failed and we were unable to recover it. 00:27:05.344 [2024-12-12 10:40:39.244379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.344 [2024-12-12 10:40:39.244414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.344 qpair failed and we were unable to recover it. 00:27:05.344 [2024-12-12 10:40:39.244626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.344 [2024-12-12 10:40:39.244661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.344 qpair failed and we were unable to recover it. 00:27:05.344 [2024-12-12 10:40:39.244813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.344 [2024-12-12 10:40:39.244847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.344 qpair failed and we were unable to recover it. 
00:27:05.344 [2024-12-12 10:40:39.245002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.344 [2024-12-12 10:40:39.245037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:05.344 qpair failed and we were unable to recover it.
[... the same three-line error repeated 20 more times for tqpair=0x7fb838000b90, 10:40:39.245246 through 10:40:39.249800 ...]
00:27:05.344 [2024-12-12 10:40:39.250096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.344 [2024-12-12 10:40:39.250173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:05.344 qpair failed and we were unable to recover it.
[... the same three-line error repeated 79 more times for tqpair=0x7fb830000b90, 10:40:39.250403 through 10:40:39.270342 ...]
00:27:05.346 [2024-12-12 10:40:39.270648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.346 [2024-12-12 10:40:39.270729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420
00:27:05.346 qpair failed and we were unable to recover it.
00:27:05.346 [2024-12-12 10:40:39.270954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.346 [2024-12-12 10:40:39.271033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:05.346 qpair failed and we were unable to recover it.
[... the same three-line error repeated 35 more times for tqpair=0x7fb838000b90, 10:40:39.271296 through 10:40:39.279505 ...]
00:27:05.347 [2024-12-12 10:40:39.281808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.347 [2024-12-12 10:40:39.281888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420
00:27:05.347 qpair failed and we were unable to recover it.
[... the same three-line error repeated 71 more times for tqpair=0x7fb82c000b90, 10:40:39.282086 through 10:40:39.299727 ...]
00:27:05.349 [2024-12-12 10:40:39.299986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.349 [2024-12-12 10:40:39.300020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.349 qpair failed and we were unable to recover it. 00:27:05.349 [2024-12-12 10:40:39.300219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.349 [2024-12-12 10:40:39.300253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.349 qpair failed and we were unable to recover it. 00:27:05.349 [2024-12-12 10:40:39.300453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.349 [2024-12-12 10:40:39.300487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.349 qpair failed and we were unable to recover it. 00:27:05.349 [2024-12-12 10:40:39.300685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.349 [2024-12-12 10:40:39.300721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.349 qpair failed and we were unable to recover it. 00:27:05.349 [2024-12-12 10:40:39.301000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.349 [2024-12-12 10:40:39.301034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.349 qpair failed and we were unable to recover it. 00:27:05.349 [2024-12-12 10:40:39.301186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.349 [2024-12-12 10:40:39.301221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.349 qpair failed and we were unable to recover it. 00:27:05.349 [2024-12-12 10:40:39.301409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.349 [2024-12-12 10:40:39.301444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.349 qpair failed and we were unable to recover it. 00:27:05.349 [2024-12-12 10:40:39.301678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.349 [2024-12-12 10:40:39.301713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.349 qpair failed and we were unable to recover it. 00:27:05.349 [2024-12-12 10:40:39.301855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.349 [2024-12-12 10:40:39.301889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.349 qpair failed and we were unable to recover it. 00:27:05.349 [2024-12-12 10:40:39.302112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.349 [2024-12-12 10:40:39.302146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.349 qpair failed and we were unable to recover it. 
00:27:05.349 [2024-12-12 10:40:39.302471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.349 [2024-12-12 10:40:39.302506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.349 qpair failed and we were unable to recover it. 00:27:05.349 [2024-12-12 10:40:39.302724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-12-12 10:40:39.302765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-12-12 10:40:39.302971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-12-12 10:40:39.303007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-12-12 10:40:39.303169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-12-12 10:40:39.303203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-12-12 10:40:39.303431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-12-12 10:40:39.303465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-12-12 10:40:39.303678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-12-12 10:40:39.303714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-12-12 10:40:39.303943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-12-12 10:40:39.303978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-12-12 10:40:39.304112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-12-12 10:40:39.304148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-12-12 10:40:39.304347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-12-12 10:40:39.304381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-12-12 10:40:39.304539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-12-12 10:40:39.304583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 
00:27:05.350 [2024-12-12 10:40:39.304861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-12-12 10:40:39.304896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-12-12 10:40:39.305057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-12-12 10:40:39.305094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-12-12 10:40:39.305232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-12-12 10:40:39.305265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-12-12 10:40:39.305553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-12-12 10:40:39.305598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-12-12 10:40:39.305736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-12-12 10:40:39.305772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-12-12 10:40:39.305933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-12-12 10:40:39.305967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-12-12 10:40:39.306123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-12-12 10:40:39.306157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-12-12 10:40:39.306407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-12-12 10:40:39.306441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-12-12 10:40:39.306676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-12-12 10:40:39.306713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-12-12 10:40:39.306862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-12-12 10:40:39.306897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 
00:27:05.350 [2024-12-12 10:40:39.307047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-12-12 10:40:39.307083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-12-12 10:40:39.307299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-12-12 10:40:39.307333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-12-12 10:40:39.307544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-12-12 10:40:39.307586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-12-12 10:40:39.307746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-12-12 10:40:39.307780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-12-12 10:40:39.307905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-12-12 10:40:39.307938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-12-12 10:40:39.308072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-12-12 10:40:39.308106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-12-12 10:40:39.308330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-12-12 10:40:39.308365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-12-12 10:40:39.308552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-12-12 10:40:39.308594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-12-12 10:40:39.308853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-12-12 10:40:39.308886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-12-12 10:40:39.309038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-12-12 10:40:39.309072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 
00:27:05.350 [2024-12-12 10:40:39.309281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-12-12 10:40:39.309315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-12-12 10:40:39.309613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-12-12 10:40:39.309649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-12-12 10:40:39.309797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-12-12 10:40:39.309831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-12-12 10:40:39.310038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-12-12 10:40:39.310072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-12-12 10:40:39.310386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-12-12 10:40:39.310420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-12-12 10:40:39.310629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-12-12 10:40:39.310663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-12-12 10:40:39.310816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-12-12 10:40:39.310851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-12-12 10:40:39.310989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-12-12 10:40:39.311023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-12-12 10:40:39.311319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-12-12 10:40:39.311354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-12-12 10:40:39.311625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-12-12 10:40:39.311662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 
00:27:05.350 [2024-12-12 10:40:39.311856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-12-12 10:40:39.311890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.351 qpair failed and we were unable to recover it. 00:27:05.351 [2024-12-12 10:40:39.312121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.351 [2024-12-12 10:40:39.312162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.351 qpair failed and we were unable to recover it. 00:27:05.351 [2024-12-12 10:40:39.312440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.351 [2024-12-12 10:40:39.312474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.351 qpair failed and we were unable to recover it. 00:27:05.351 [2024-12-12 10:40:39.312716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.351 [2024-12-12 10:40:39.312752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.351 qpair failed and we were unable to recover it. 00:27:05.351 [2024-12-12 10:40:39.312945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.351 [2024-12-12 10:40:39.312978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.351 qpair failed and we were unable to recover it. 00:27:05.351 [2024-12-12 10:40:39.313117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.351 [2024-12-12 10:40:39.313151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.351 qpair failed and we were unable to recover it. 00:27:05.351 [2024-12-12 10:40:39.313445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.351 [2024-12-12 10:40:39.313479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.351 qpair failed and we were unable to recover it. 00:27:05.351 [2024-12-12 10:40:39.313699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.351 [2024-12-12 10:40:39.313735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.351 qpair failed and we were unable to recover it. 00:27:05.351 [2024-12-12 10:40:39.313989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.351 [2024-12-12 10:40:39.314023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.351 qpair failed and we were unable to recover it. 00:27:05.351 [2024-12-12 10:40:39.314238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.351 [2024-12-12 10:40:39.314272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.351 qpair failed and we were unable to recover it. 
00:27:05.351 [2024-12-12 10:40:39.314519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.351 [2024-12-12 10:40:39.314553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.351 qpair failed and we were unable to recover it. 00:27:05.351 [2024-12-12 10:40:39.314713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.351 [2024-12-12 10:40:39.314748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.351 qpair failed and we were unable to recover it. 00:27:05.351 [2024-12-12 10:40:39.315026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.351 [2024-12-12 10:40:39.315059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.351 qpair failed and we were unable to recover it. 00:27:05.351 [2024-12-12 10:40:39.315338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.351 [2024-12-12 10:40:39.315372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.351 qpair failed and we were unable to recover it. 00:27:05.351 [2024-12-12 10:40:39.315507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.351 [2024-12-12 10:40:39.315541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.351 qpair failed and we were unable to recover it. 00:27:05.351 [2024-12-12 10:40:39.315817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.351 [2024-12-12 10:40:39.315852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.351 qpair failed and we were unable to recover it. 00:27:05.351 [2024-12-12 10:40:39.316039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.351 [2024-12-12 10:40:39.316075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.351 qpair failed and we were unable to recover it. 00:27:05.351 [2024-12-12 10:40:39.316329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.351 [2024-12-12 10:40:39.316364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.351 qpair failed and we were unable to recover it. 00:27:05.351 [2024-12-12 10:40:39.316566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.351 [2024-12-12 10:40:39.316614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.351 qpair failed and we were unable to recover it. 00:27:05.351 [2024-12-12 10:40:39.316764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.351 [2024-12-12 10:40:39.316799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.351 qpair failed and we were unable to recover it. 
00:27:05.351 [2024-12-12 10:40:39.317054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.351 [2024-12-12 10:40:39.317088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.351 qpair failed and we were unable to recover it. 00:27:05.351 [2024-12-12 10:40:39.317218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.351 [2024-12-12 10:40:39.317252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.351 qpair failed and we were unable to recover it. 00:27:05.351 [2024-12-12 10:40:39.317451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.351 [2024-12-12 10:40:39.317486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.351 qpair failed and we were unable to recover it. 00:27:05.351 [2024-12-12 10:40:39.317744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.351 [2024-12-12 10:40:39.317781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.351 qpair failed and we were unable to recover it. 00:27:05.351 [2024-12-12 10:40:39.317983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.351 [2024-12-12 10:40:39.318018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.351 qpair failed and we were unable to recover it. 00:27:05.351 [2024-12-12 10:40:39.318218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.351 [2024-12-12 10:40:39.318253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.351 qpair failed and we were unable to recover it. 00:27:05.351 [2024-12-12 10:40:39.318504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.351 [2024-12-12 10:40:39.318538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.351 qpair failed and we were unable to recover it. 00:27:05.351 [2024-12-12 10:40:39.318812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.351 [2024-12-12 10:40:39.318848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.351 qpair failed and we were unable to recover it. 00:27:05.351 [2024-12-12 10:40:39.318999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.351 [2024-12-12 10:40:39.319032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.351 qpair failed and we were unable to recover it. 00:27:05.351 [2024-12-12 10:40:39.319223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.351 [2024-12-12 10:40:39.319256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.351 qpair failed and we were unable to recover it. 
00:27:05.351 [2024-12-12 10:40:39.319530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.351 [2024-12-12 10:40:39.319564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.351 qpair failed and we were unable to recover it. 00:27:05.351 [2024-12-12 10:40:39.319829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.351 [2024-12-12 10:40:39.319864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.351 qpair failed and we were unable to recover it. 00:27:05.351 [2024-12-12 10:40:39.320069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.351 [2024-12-12 10:40:39.320102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.351 qpair failed and we were unable to recover it. 00:27:05.351 [2024-12-12 10:40:39.320384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.351 [2024-12-12 10:40:39.320418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.351 qpair failed and we were unable to recover it. 00:27:05.351 [2024-12-12 10:40:39.320568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.351 [2024-12-12 10:40:39.320616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.351 qpair failed and we were unable to recover it. 00:27:05.351 [2024-12-12 10:40:39.320822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.352 [2024-12-12 10:40:39.320856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.352 qpair failed and we were unable to recover it. 00:27:05.352 [2024-12-12 10:40:39.320994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.352 [2024-12-12 10:40:39.321028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.352 qpair failed and we were unable to recover it. 00:27:05.352 [2024-12-12 10:40:39.321363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.352 [2024-12-12 10:40:39.321398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.352 qpair failed and we were unable to recover it. 00:27:05.352 [2024-12-12 10:40:39.321700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.352 [2024-12-12 10:40:39.321737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.352 qpair failed and we were unable to recover it. 00:27:05.352 [2024-12-12 10:40:39.321947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.352 [2024-12-12 10:40:39.321982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.352 qpair failed and we were unable to recover it. 
00:27:05.352 [2024-12-12 10:40:39.322281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.352 [2024-12-12 10:40:39.322315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.352 qpair failed and we were unable to recover it. 00:27:05.352 [2024-12-12 10:40:39.322564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.629 [2024-12-12 10:40:39.322614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.629 qpair failed and we were unable to recover it. 00:27:05.629 [2024-12-12 10:40:39.322754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.629 [2024-12-12 10:40:39.322788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.629 qpair failed and we were unable to recover it. 00:27:05.629 [2024-12-12 10:40:39.322975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.629 [2024-12-12 10:40:39.323009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.629 qpair failed and we were unable to recover it. 00:27:05.629 [2024-12-12 10:40:39.323331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.629 [2024-12-12 10:40:39.323365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.629 qpair failed and we were unable to recover it. 00:27:05.629 [2024-12-12 10:40:39.323491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.629 [2024-12-12 10:40:39.323526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.629 qpair failed and we were unable to recover it. 00:27:05.629 [2024-12-12 10:40:39.323741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.629 [2024-12-12 10:40:39.323775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.629 qpair failed and we were unable to recover it. 00:27:05.629 [2024-12-12 10:40:39.324090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.629 [2024-12-12 10:40:39.324124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.629 qpair failed and we were unable to recover it. 00:27:05.629 [2024-12-12 10:40:39.324314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.629 [2024-12-12 10:40:39.324348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.629 qpair failed and we were unable to recover it. 00:27:05.629 [2024-12-12 10:40:39.324533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.629 [2024-12-12 10:40:39.324567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.629 qpair failed and we were unable to recover it. 
00:27:05.629 [2024-12-12 10:40:39.324855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.629 [2024-12-12 10:40:39.324889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.629 qpair failed and we were unable to recover it. 00:27:05.629 [2024-12-12 10:40:39.325114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.629 [2024-12-12 10:40:39.325149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.629 qpair failed and we were unable to recover it. 00:27:05.629 [2024-12-12 10:40:39.325429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.629 [2024-12-12 10:40:39.325462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.629 qpair failed and we were unable to recover it. 00:27:05.629 [2024-12-12 10:40:39.325648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.629 [2024-12-12 10:40:39.325684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.629 qpair failed and we were unable to recover it. 00:27:05.629 [2024-12-12 10:40:39.325948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.629 [2024-12-12 10:40:39.325983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.629 qpair failed and we were unable to recover it. 00:27:05.629 [2024-12-12 10:40:39.326266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.629 [2024-12-12 10:40:39.326299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.629 qpair failed and we were unable to recover it. 00:27:05.629 [2024-12-12 10:40:39.326511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.629 [2024-12-12 10:40:39.326545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.629 qpair failed and we were unable to recover it. 00:27:05.629 [2024-12-12 10:40:39.326854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.629 [2024-12-12 10:40:39.326888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.629 qpair failed and we were unable to recover it. 00:27:05.629 [2024-12-12 10:40:39.327073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.629 [2024-12-12 10:40:39.327107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.629 qpair failed and we were unable to recover it. 00:27:05.629 [2024-12-12 10:40:39.327305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.629 [2024-12-12 10:40:39.327340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.629 qpair failed and we were unable to recover it. 
00:27:05.629 [2024-12-12 10:40:39.327593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.630 [2024-12-12 10:40:39.327629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.630 qpair failed and we were unable to recover it. 00:27:05.630 [2024-12-12 10:40:39.327834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.630 [2024-12-12 10:40:39.327867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.630 qpair failed and we were unable to recover it. 00:27:05.630 [2024-12-12 10:40:39.328137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.630 [2024-12-12 10:40:39.328171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.630 qpair failed and we were unable to recover it. 00:27:05.630 [2024-12-12 10:40:39.328454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.630 [2024-12-12 10:40:39.328488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.630 qpair failed and we were unable to recover it. 00:27:05.630 [2024-12-12 10:40:39.328768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.630 [2024-12-12 10:40:39.328802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.630 qpair failed and we were unable to recover it. 00:27:05.630 [2024-12-12 10:40:39.329019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.630 [2024-12-12 10:40:39.329052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.630 qpair failed and we were unable to recover it. 00:27:05.630 [2024-12-12 10:40:39.329311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.630 [2024-12-12 10:40:39.329344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.630 qpair failed and we were unable to recover it. 00:27:05.630 [2024-12-12 10:40:39.329597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.630 [2024-12-12 10:40:39.329633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.630 qpair failed and we were unable to recover it. 00:27:05.630 [2024-12-12 10:40:39.329927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.630 [2024-12-12 10:40:39.329961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.630 qpair failed and we were unable to recover it. 00:27:05.630 [2024-12-12 10:40:39.330151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.630 [2024-12-12 10:40:39.330186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.630 qpair failed and we were unable to recover it. 
00:27:05.630 [2024-12-12 10:40:39.330442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.630 [2024-12-12 10:40:39.330477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.630 qpair failed and we were unable to recover it. 00:27:05.630 [2024-12-12 10:40:39.330661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.630 [2024-12-12 10:40:39.330696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.630 qpair failed and we were unable to recover it. 00:27:05.630 [2024-12-12 10:40:39.330904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.630 [2024-12-12 10:40:39.330939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.630 qpair failed and we were unable to recover it. 00:27:05.630 [2024-12-12 10:40:39.331217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.630 [2024-12-12 10:40:39.331251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.630 qpair failed and we were unable to recover it. 00:27:05.630 [2024-12-12 10:40:39.331537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.630 [2024-12-12 10:40:39.331579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.630 qpair failed and we were unable to recover it. 00:27:05.630 [2024-12-12 10:40:39.331851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.630 [2024-12-12 10:40:39.331886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.630 qpair failed and we were unable to recover it. 00:27:05.630 [2024-12-12 10:40:39.332179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.630 [2024-12-12 10:40:39.332213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.630 qpair failed and we were unable to recover it. 00:27:05.630 [2024-12-12 10:40:39.332362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.630 [2024-12-12 10:40:39.332396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.630 qpair failed and we were unable to recover it. 00:27:05.630 [2024-12-12 10:40:39.332528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.630 [2024-12-12 10:40:39.332563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.630 qpair failed and we were unable to recover it. 00:27:05.630 [2024-12-12 10:40:39.332781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.630 [2024-12-12 10:40:39.332816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.630 qpair failed and we were unable to recover it. 
00:27:05.630 [2024-12-12 10:40:39.333000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.630 [2024-12-12 10:40:39.333033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420
00:27:05.630 qpair failed and we were unable to recover it.
00:27:05.630-00:27:05.636 [... the three-line connect()/qpair-failure record above repeats 210 times in total between 10:40:39.333000 and 10:40:39.391195, identical except for timestamps: errno = 111, tqpair=0x7fb82c000b90, addr=10.0.0.2, port=4420 ...]
00:27:05.636 [2024-12-12 10:40:39.391475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-12-12 10:40:39.391508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-12-12 10:40:39.391798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-12-12 10:40:39.391833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-12-12 10:40:39.392128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-12-12 10:40:39.392161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-12-12 10:40:39.392452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-12-12 10:40:39.392486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-12-12 10:40:39.392642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-12-12 10:40:39.392677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-12-12 10:40:39.392826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-12-12 10:40:39.392861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-12-12 10:40:39.393136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-12-12 10:40:39.393171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-12-12 10:40:39.393367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-12-12 10:40:39.393402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-12-12 10:40:39.393656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-12-12 10:40:39.393693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-12-12 10:40:39.393995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-12-12 10:40:39.394028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 
00:27:05.636 [2024-12-12 10:40:39.394330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-12-12 10:40:39.394365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-12-12 10:40:39.394657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-12-12 10:40:39.394692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-12-12 10:40:39.394961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-12-12 10:40:39.394996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-12-12 10:40:39.395260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-12-12 10:40:39.395294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-12-12 10:40:39.395610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-12-12 10:40:39.395646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-12-12 10:40:39.395834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-12-12 10:40:39.395869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-12-12 10:40:39.396068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-12-12 10:40:39.396103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-12-12 10:40:39.396323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-12-12 10:40:39.396358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-12-12 10:40:39.396622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-12-12 10:40:39.396658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-12-12 10:40:39.396893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-12-12 10:40:39.396928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 
00:27:05.636 [2024-12-12 10:40:39.397184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-12-12 10:40:39.397219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-12-12 10:40:39.397524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-12-12 10:40:39.397558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-12-12 10:40:39.397849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-12-12 10:40:39.397884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-12-12 10:40:39.398186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-12-12 10:40:39.398220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-12-12 10:40:39.398419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-12-12 10:40:39.398453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-12-12 10:40:39.398593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-12-12 10:40:39.398628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-12-12 10:40:39.398907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-12-12 10:40:39.398941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-12-12 10:40:39.399148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-12-12 10:40:39.399181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.637 [2024-12-12 10:40:39.399433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.637 [2024-12-12 10:40:39.399466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.637 qpair failed and we were unable to recover it. 00:27:05.637 [2024-12-12 10:40:39.399683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.637 [2024-12-12 10:40:39.399719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.637 qpair failed and we were unable to recover it. 
00:27:05.637 [2024-12-12 10:40:39.399995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.637 [2024-12-12 10:40:39.400029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.637 qpair failed and we were unable to recover it. 00:27:05.637 [2024-12-12 10:40:39.400313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.637 [2024-12-12 10:40:39.400353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.637 qpair failed and we were unable to recover it. 00:27:05.637 [2024-12-12 10:40:39.400633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.637 [2024-12-12 10:40:39.400668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.637 qpair failed and we were unable to recover it. 00:27:05.637 [2024-12-12 10:40:39.400942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.637 [2024-12-12 10:40:39.400977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.637 qpair failed and we were unable to recover it. 00:27:05.637 [2024-12-12 10:40:39.401110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.637 [2024-12-12 10:40:39.401144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.637 qpair failed and we were unable to recover it. 00:27:05.637 [2024-12-12 10:40:39.401343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.637 [2024-12-12 10:40:39.401378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.637 qpair failed and we were unable to recover it. 00:27:05.637 [2024-12-12 10:40:39.401654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.637 [2024-12-12 10:40:39.401690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.637 qpair failed and we were unable to recover it. 00:27:05.637 [2024-12-12 10:40:39.401875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.637 [2024-12-12 10:40:39.401910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.637 qpair failed and we were unable to recover it. 00:27:05.637 [2024-12-12 10:40:39.402177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.637 [2024-12-12 10:40:39.402211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.637 qpair failed and we were unable to recover it. 00:27:05.637 [2024-12-12 10:40:39.402482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.637 [2024-12-12 10:40:39.402516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.637 qpair failed and we were unable to recover it. 
00:27:05.637 [2024-12-12 10:40:39.402742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.637 [2024-12-12 10:40:39.402778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.637 qpair failed and we were unable to recover it. 00:27:05.637 [2024-12-12 10:40:39.402960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.637 [2024-12-12 10:40:39.402993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.637 qpair failed and we were unable to recover it. 00:27:05.637 [2024-12-12 10:40:39.403180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.637 [2024-12-12 10:40:39.403214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.637 qpair failed and we were unable to recover it. 00:27:05.637 [2024-12-12 10:40:39.403486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.637 [2024-12-12 10:40:39.403519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.637 qpair failed and we were unable to recover it. 00:27:05.637 [2024-12-12 10:40:39.403793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.637 [2024-12-12 10:40:39.403828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.637 qpair failed and we were unable to recover it. 00:27:05.637 [2024-12-12 10:40:39.404023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.637 [2024-12-12 10:40:39.404058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.637 qpair failed and we were unable to recover it. 00:27:05.637 [2024-12-12 10:40:39.404246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.637 [2024-12-12 10:40:39.404280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.637 qpair failed and we were unable to recover it. 00:27:05.637 [2024-12-12 10:40:39.404531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.637 [2024-12-12 10:40:39.404566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.637 qpair failed and we were unable to recover it. 00:27:05.637 [2024-12-12 10:40:39.404854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.637 [2024-12-12 10:40:39.404889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.637 qpair failed and we were unable to recover it. 00:27:05.637 [2024-12-12 10:40:39.405162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.637 [2024-12-12 10:40:39.405197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.637 qpair failed and we were unable to recover it. 
00:27:05.637 [2024-12-12 10:40:39.405408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.637 [2024-12-12 10:40:39.405443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.637 qpair failed and we were unable to recover it. 00:27:05.637 [2024-12-12 10:40:39.405700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.637 [2024-12-12 10:40:39.405736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.637 qpair failed and we were unable to recover it. 00:27:05.637 [2024-12-12 10:40:39.405877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.637 [2024-12-12 10:40:39.405911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.637 qpair failed and we were unable to recover it. 00:27:05.637 [2024-12-12 10:40:39.406093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.637 [2024-12-12 10:40:39.406127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.637 qpair failed and we were unable to recover it. 00:27:05.637 [2024-12-12 10:40:39.406312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.637 [2024-12-12 10:40:39.406345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.637 qpair failed and we were unable to recover it. 00:27:05.637 [2024-12-12 10:40:39.406625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.637 [2024-12-12 10:40:39.406660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.637 qpair failed and we were unable to recover it. 00:27:05.637 [2024-12-12 10:40:39.406921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.637 [2024-12-12 10:40:39.406957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.637 qpair failed and we were unable to recover it. 00:27:05.637 [2024-12-12 10:40:39.407209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.637 [2024-12-12 10:40:39.407243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.637 qpair failed and we were unable to recover it. 00:27:05.637 [2024-12-12 10:40:39.407377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.637 [2024-12-12 10:40:39.407411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.637 qpair failed and we were unable to recover it. 00:27:05.637 [2024-12-12 10:40:39.407690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.637 [2024-12-12 10:40:39.407726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.637 qpair failed and we were unable to recover it. 
00:27:05.637 [2024-12-12 10:40:39.407910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.637 [2024-12-12 10:40:39.407944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.637 qpair failed and we were unable to recover it. 00:27:05.637 [2024-12-12 10:40:39.408210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.637 [2024-12-12 10:40:39.408245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.637 qpair failed and we were unable to recover it. 00:27:05.637 [2024-12-12 10:40:39.408521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.637 [2024-12-12 10:40:39.408555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.637 qpair failed and we were unable to recover it. 00:27:05.637 [2024-12-12 10:40:39.408880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.637 [2024-12-12 10:40:39.408915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.637 qpair failed and we were unable to recover it. 00:27:05.637 [2024-12-12 10:40:39.409114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.637 [2024-12-12 10:40:39.409148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.637 qpair failed and we were unable to recover it. 00:27:05.637 [2024-12-12 10:40:39.409333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.637 [2024-12-12 10:40:39.409366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.637 qpair failed and we were unable to recover it. 00:27:05.637 [2024-12-12 10:40:39.409644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.637 [2024-12-12 10:40:39.409680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.637 qpair failed and we were unable to recover it. 00:27:05.638 [2024-12-12 10:40:39.409946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.638 [2024-12-12 10:40:39.409980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.638 qpair failed and we were unable to recover it. 00:27:05.638 [2024-12-12 10:40:39.410273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.638 [2024-12-12 10:40:39.410309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.638 qpair failed and we were unable to recover it. 00:27:05.638 [2024-12-12 10:40:39.410587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.638 [2024-12-12 10:40:39.410623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.638 qpair failed and we were unable to recover it. 
00:27:05.638 [2024-12-12 10:40:39.410841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.638 [2024-12-12 10:40:39.410874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.638 qpair failed and we were unable to recover it. 00:27:05.638 [2024-12-12 10:40:39.411146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.638 [2024-12-12 10:40:39.411184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.638 qpair failed and we were unable to recover it. 00:27:05.638 [2024-12-12 10:40:39.411398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.638 [2024-12-12 10:40:39.411431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.638 qpair failed and we were unable to recover it. 00:27:05.638 [2024-12-12 10:40:39.411635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.638 [2024-12-12 10:40:39.411671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.638 qpair failed and we were unable to recover it. 00:27:05.638 [2024-12-12 10:40:39.411956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.638 [2024-12-12 10:40:39.411990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.638 qpair failed and we were unable to recover it. 00:27:05.638 [2024-12-12 10:40:39.412169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.638 [2024-12-12 10:40:39.412202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.638 qpair failed and we were unable to recover it. 00:27:05.638 [2024-12-12 10:40:39.412405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.638 [2024-12-12 10:40:39.412438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.638 qpair failed and we were unable to recover it. 00:27:05.638 [2024-12-12 10:40:39.412645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.638 [2024-12-12 10:40:39.412681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.638 qpair failed and we were unable to recover it. 00:27:05.638 [2024-12-12 10:40:39.412862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.638 [2024-12-12 10:40:39.412895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.638 qpair failed and we were unable to recover it. 00:27:05.638 [2024-12-12 10:40:39.413034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.638 [2024-12-12 10:40:39.413068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.638 qpair failed and we were unable to recover it. 
00:27:05.638 [2024-12-12 10:40:39.413344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.638 [2024-12-12 10:40:39.413378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.638 qpair failed and we were unable to recover it. 00:27:05.638 [2024-12-12 10:40:39.413679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.638 [2024-12-12 10:40:39.413714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.638 qpair failed and we were unable to recover it. 00:27:05.638 [2024-12-12 10:40:39.413912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.638 [2024-12-12 10:40:39.413946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.638 qpair failed and we were unable to recover it. 00:27:05.638 [2024-12-12 10:40:39.414171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.638 [2024-12-12 10:40:39.414206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.638 qpair failed and we were unable to recover it. 00:27:05.638 [2024-12-12 10:40:39.414469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.638 [2024-12-12 10:40:39.414503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.638 qpair failed and we were unable to recover it. 00:27:05.638 [2024-12-12 10:40:39.414802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.638 [2024-12-12 10:40:39.414838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.638 qpair failed and we were unable to recover it. 00:27:05.638 [2024-12-12 10:40:39.415143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.638 [2024-12-12 10:40:39.415177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.638 qpair failed and we were unable to recover it. 00:27:05.638 [2024-12-12 10:40:39.415438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.638 [2024-12-12 10:40:39.415471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.638 qpair failed and we were unable to recover it. 00:27:05.638 [2024-12-12 10:40:39.415660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.638 [2024-12-12 10:40:39.415694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.638 qpair failed and we were unable to recover it. 00:27:05.638 [2024-12-12 10:40:39.415897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.638 [2024-12-12 10:40:39.415930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.638 qpair failed and we were unable to recover it. 
00:27:05.638 [2024-12-12 10:40:39.416183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.638 [2024-12-12 10:40:39.416218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.638 qpair failed and we were unable to recover it. 00:27:05.638 [2024-12-12 10:40:39.416440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.638 [2024-12-12 10:40:39.416474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.638 qpair failed and we were unable to recover it. 00:27:05.638 [2024-12-12 10:40:39.416755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.638 [2024-12-12 10:40:39.416791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.638 qpair failed and we were unable to recover it. 00:27:05.638 [2024-12-12 10:40:39.417024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.638 [2024-12-12 10:40:39.417058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.638 qpair failed and we were unable to recover it. 00:27:05.638 [2024-12-12 10:40:39.417332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.638 [2024-12-12 10:40:39.417367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.638 qpair failed and we were unable to recover it. 00:27:05.638 [2024-12-12 10:40:39.417657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.638 [2024-12-12 10:40:39.417693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.638 qpair failed and we were unable to recover it. 00:27:05.638 [2024-12-12 10:40:39.417966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.638 [2024-12-12 10:40:39.418000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.638 qpair failed and we were unable to recover it. 00:27:05.638 [2024-12-12 10:40:39.418218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.638 [2024-12-12 10:40:39.418252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.638 qpair failed and we were unable to recover it. 00:27:05.638 [2024-12-12 10:40:39.418457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.638 [2024-12-12 10:40:39.418497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.638 qpair failed and we were unable to recover it. 00:27:05.638 [2024-12-12 10:40:39.418714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.638 [2024-12-12 10:40:39.418751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.638 qpair failed and we were unable to recover it. 
00:27:05.638 [2024-12-12 10:40:39.418966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.638 [2024-12-12 10:40:39.419001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.638 qpair failed and we were unable to recover it. 00:27:05.638 [2024-12-12 10:40:39.419255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.639 [2024-12-12 10:40:39.419288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.639 qpair failed and we were unable to recover it. 00:27:05.639 [2024-12-12 10:40:39.419547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.639 [2024-12-12 10:40:39.419606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.639 qpair failed and we were unable to recover it. 00:27:05.639 [2024-12-12 10:40:39.419824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.639 [2024-12-12 10:40:39.419858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.639 qpair failed and we were unable to recover it. 00:27:05.639 [2024-12-12 10:40:39.419987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.639 [2024-12-12 10:40:39.420021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.639 qpair failed and we were unable to recover it. 00:27:05.639 [2024-12-12 10:40:39.420221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.639 [2024-12-12 10:40:39.420256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.639 qpair failed and we were unable to recover it. 00:27:05.639 [2024-12-12 10:40:39.420455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.639 [2024-12-12 10:40:39.420489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.639 qpair failed and we were unable to recover it. 00:27:05.639 [2024-12-12 10:40:39.420769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.639 [2024-12-12 10:40:39.420806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.639 qpair failed and we were unable to recover it. 00:27:05.639 [2024-12-12 10:40:39.421047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.639 [2024-12-12 10:40:39.421082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.639 qpair failed and we were unable to recover it. 00:27:05.639 [2024-12-12 10:40:39.421346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.639 [2024-12-12 10:40:39.421380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.639 qpair failed and we were unable to recover it. 
00:27:05.639 [2024-12-12 10:40:39.421632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.639 [2024-12-12 10:40:39.421669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.639 qpair failed and we were unable to recover it. 00:27:05.639 [2024-12-12 10:40:39.421922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.639 [2024-12-12 10:40:39.421957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.639 qpair failed and we were unable to recover it. 00:27:05.639 [2024-12-12 10:40:39.422195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.639 [2024-12-12 10:40:39.422230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.639 qpair failed and we were unable to recover it. 00:27:05.639 [2024-12-12 10:40:39.422379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.639 [2024-12-12 10:40:39.422413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.639 qpair failed and we were unable to recover it. 00:27:05.639 [2024-12-12 10:40:39.422631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.639 [2024-12-12 10:40:39.422666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.639 qpair failed and we were unable to recover it. 00:27:05.639 [2024-12-12 10:40:39.422856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.639 [2024-12-12 10:40:39.422891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.639 qpair failed and we were unable to recover it. 00:27:05.639 [2024-12-12 10:40:39.423144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.639 [2024-12-12 10:40:39.423179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.639 qpair failed and we were unable to recover it. 00:27:05.639 [2024-12-12 10:40:39.423458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.639 [2024-12-12 10:40:39.423492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.639 qpair failed and we were unable to recover it. 00:27:05.639 [2024-12-12 10:40:39.423624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.639 [2024-12-12 10:40:39.423659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.639 qpair failed and we were unable to recover it. 00:27:05.639 [2024-12-12 10:40:39.423874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.639 [2024-12-12 10:40:39.423909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.639 qpair failed and we were unable to recover it. 
00:27:05.639 [2024-12-12 10:40:39.424197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.639 [2024-12-12 10:40:39.424232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.639 qpair failed and we were unable to recover it. 00:27:05.639 [2024-12-12 10:40:39.424528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.639 [2024-12-12 10:40:39.424562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.639 qpair failed and we were unable to recover it. 00:27:05.639 [2024-12-12 10:40:39.424845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.639 [2024-12-12 10:40:39.424881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.639 qpair failed and we were unable to recover it. 00:27:05.639 [2024-12-12 10:40:39.425067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.639 [2024-12-12 10:40:39.425101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.639 qpair failed and we were unable to recover it. 00:27:05.639 [2024-12-12 10:40:39.425369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.639 [2024-12-12 10:40:39.425403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.639 qpair failed and we were unable to recover it. 00:27:05.639 [2024-12-12 10:40:39.425672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.639 [2024-12-12 10:40:39.425708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.639 qpair failed and we were unable to recover it. 00:27:05.639 [2024-12-12 10:40:39.426005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.639 [2024-12-12 10:40:39.426039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.639 qpair failed and we were unable to recover it. 00:27:05.639 [2024-12-12 10:40:39.426304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.639 [2024-12-12 10:40:39.426339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.639 qpair failed and we were unable to recover it. 00:27:05.639 [2024-12-12 10:40:39.426634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.639 [2024-12-12 10:40:39.426669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.639 qpair failed and we were unable to recover it. 00:27:05.639 [2024-12-12 10:40:39.426905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.639 [2024-12-12 10:40:39.426939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.639 qpair failed and we were unable to recover it. 
00:27:05.639 [2024-12-12 10:40:39.427136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.639 [2024-12-12 10:40:39.427170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420
00:27:05.639 qpair failed and we were unable to recover it.
00:27:05.645 [identical connect()/qpair-failure pairs for tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 repeat on every retry from 2024-12-12 10:40:39.427 through 10:40:39.484; duplicate log entries omitted]
00:27:05.645 [2024-12-12 10:40:39.484588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-12-12 10:40:39.484624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-12-12 10:40:39.484820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-12-12 10:40:39.484854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-12-12 10:40:39.484985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-12-12 10:40:39.485019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-12-12 10:40:39.485296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-12-12 10:40:39.485330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-12-12 10:40:39.485596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-12-12 10:40:39.485631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-12-12 10:40:39.485827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-12-12 10:40:39.485861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-12-12 10:40:39.486044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-12-12 10:40:39.486079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-12-12 10:40:39.486359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-12-12 10:40:39.486393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-12-12 10:40:39.486669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-12-12 10:40:39.486705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-12-12 10:40:39.486988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-12-12 10:40:39.487024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 
00:27:05.645 [2024-12-12 10:40:39.487274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-12-12 10:40:39.487309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-12-12 10:40:39.487509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-12-12 10:40:39.487543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-12-12 10:40:39.487820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-12-12 10:40:39.487855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-12-12 10:40:39.488133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-12-12 10:40:39.488167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-12-12 10:40:39.488376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-12-12 10:40:39.488410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-12-12 10:40:39.488531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-12-12 10:40:39.488563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-12-12 10:40:39.488829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-12-12 10:40:39.488864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-12-12 10:40:39.489117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-12-12 10:40:39.489149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-12-12 10:40:39.489400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-12-12 10:40:39.489433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-12-12 10:40:39.489687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-12-12 10:40:39.489723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 
00:27:05.645 [2024-12-12 10:40:39.490000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-12-12 10:40:39.490034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-12-12 10:40:39.490231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-12-12 10:40:39.490265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-12-12 10:40:39.490448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-12-12 10:40:39.490481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-12-12 10:40:39.490758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-12-12 10:40:39.490794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-12-12 10:40:39.491064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-12-12 10:40:39.491098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-12-12 10:40:39.491393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-12-12 10:40:39.491429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-12-12 10:40:39.491699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-12-12 10:40:39.491736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.646 [2024-12-12 10:40:39.491948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-12-12 10:40:39.491983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-12-12 10:40:39.492208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-12-12 10:40:39.492242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-12-12 10:40:39.492519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-12-12 10:40:39.492554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 
00:27:05.646 [2024-12-12 10:40:39.492839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-12-12 10:40:39.492875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-12-12 10:40:39.493078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-12-12 10:40:39.493112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-12-12 10:40:39.493382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-12-12 10:40:39.493415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-12-12 10:40:39.493605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-12-12 10:40:39.493642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-12-12 10:40:39.493926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-12-12 10:40:39.493960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-12-12 10:40:39.494237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-12-12 10:40:39.494270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-12-12 10:40:39.494555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-12-12 10:40:39.494598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-12-12 10:40:39.494873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-12-12 10:40:39.494908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-12-12 10:40:39.495145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-12-12 10:40:39.495189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-12-12 10:40:39.495461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-12-12 10:40:39.495495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 
00:27:05.646 [2024-12-12 10:40:39.495723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-12-12 10:40:39.495759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-12-12 10:40:39.495983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-12-12 10:40:39.496017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-12-12 10:40:39.496229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-12-12 10:40:39.496264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-12-12 10:40:39.496541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-12-12 10:40:39.496603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-12-12 10:40:39.496859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-12-12 10:40:39.496894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-12-12 10:40:39.497169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-12-12 10:40:39.497203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-12-12 10:40:39.497456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-12-12 10:40:39.497490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-12-12 10:40:39.497675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-12-12 10:40:39.497710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-12-12 10:40:39.497934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-12-12 10:40:39.497969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-12-12 10:40:39.498244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-12-12 10:40:39.498278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 
00:27:05.646 [2024-12-12 10:40:39.498533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-12-12 10:40:39.498567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-12-12 10:40:39.498778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-12-12 10:40:39.498813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-12-12 10:40:39.499077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-12-12 10:40:39.499113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-12-12 10:40:39.499317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-12-12 10:40:39.499350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-12-12 10:40:39.499634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-12-12 10:40:39.499670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-12-12 10:40:39.499950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-12-12 10:40:39.499984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-12-12 10:40:39.500185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-12-12 10:40:39.500219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-12-12 10:40:39.500408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-12-12 10:40:39.500442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-12-12 10:40:39.500694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-12-12 10:40:39.500729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-12-12 10:40:39.501005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-12-12 10:40:39.501039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 
00:27:05.646 [2024-12-12 10:40:39.501237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-12-12 10:40:39.501271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-12-12 10:40:39.501488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-12-12 10:40:39.501522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-12-12 10:40:39.501833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-12-12 10:40:39.501869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-12-12 10:40:39.502051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-12-12 10:40:39.502085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-12-12 10:40:39.502390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-12-12 10:40:39.502425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-12-12 10:40:39.502702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-12-12 10:40:39.502738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-12-12 10:40:39.503020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.647 [2024-12-12 10:40:39.503054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.647 qpair failed and we were unable to recover it. 00:27:05.647 [2024-12-12 10:40:39.503338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.647 [2024-12-12 10:40:39.503372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.647 qpair failed and we were unable to recover it. 00:27:05.647 [2024-12-12 10:40:39.503647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.647 [2024-12-12 10:40:39.503681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.647 qpair failed and we were unable to recover it. 00:27:05.647 [2024-12-12 10:40:39.503966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.647 [2024-12-12 10:40:39.504001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.647 qpair failed and we were unable to recover it. 
00:27:05.647 [2024-12-12 10:40:39.504282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.647 [2024-12-12 10:40:39.504316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.647 qpair failed and we were unable to recover it. 00:27:05.647 [2024-12-12 10:40:39.504599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.647 [2024-12-12 10:40:39.504634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.647 qpair failed and we were unable to recover it. 00:27:05.647 [2024-12-12 10:40:39.504915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.647 [2024-12-12 10:40:39.504949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.647 qpair failed and we were unable to recover it. 00:27:05.647 [2024-12-12 10:40:39.505226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.647 [2024-12-12 10:40:39.505260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.647 qpair failed and we were unable to recover it. 00:27:05.647 [2024-12-12 10:40:39.505534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.647 [2024-12-12 10:40:39.505567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.647 qpair failed and we were unable to recover it. 00:27:05.647 [2024-12-12 10:40:39.505773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.647 [2024-12-12 10:40:39.505808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.647 qpair failed and we were unable to recover it. 00:27:05.647 [2024-12-12 10:40:39.505992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.647 [2024-12-12 10:40:39.506026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.647 qpair failed and we were unable to recover it. 00:27:05.647 [2024-12-12 10:40:39.506238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.647 [2024-12-12 10:40:39.506272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.647 qpair failed and we were unable to recover it. 00:27:05.647 [2024-12-12 10:40:39.506481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.647 [2024-12-12 10:40:39.506521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.647 qpair failed and we were unable to recover it. 00:27:05.647 [2024-12-12 10:40:39.506807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.647 [2024-12-12 10:40:39.506843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.647 qpair failed and we were unable to recover it. 
00:27:05.647 [2024-12-12 10:40:39.507112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.647 [2024-12-12 10:40:39.507146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.647 qpair failed and we were unable to recover it. 00:27:05.647 [2024-12-12 10:40:39.507331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.647 [2024-12-12 10:40:39.507364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.647 qpair failed and we were unable to recover it. 00:27:05.647 [2024-12-12 10:40:39.507579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.647 [2024-12-12 10:40:39.507615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.647 qpair failed and we were unable to recover it. 00:27:05.647 [2024-12-12 10:40:39.507834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.647 [2024-12-12 10:40:39.507867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.647 qpair failed and we were unable to recover it. 00:27:05.647 [2024-12-12 10:40:39.508083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.647 [2024-12-12 10:40:39.508117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.647 qpair failed and we were unable to recover it. 00:27:05.647 [2024-12-12 10:40:39.508399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.647 [2024-12-12 10:40:39.508434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.647 qpair failed and we were unable to recover it. 00:27:05.647 [2024-12-12 10:40:39.508715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.647 [2024-12-12 10:40:39.508751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.647 qpair failed and we were unable to recover it. 00:27:05.647 [2024-12-12 10:40:39.509029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.647 [2024-12-12 10:40:39.509062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.647 qpair failed and we were unable to recover it. 00:27:05.647 [2024-12-12 10:40:39.509341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.647 [2024-12-12 10:40:39.509374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.647 qpair failed and we were unable to recover it. 00:27:05.647 [2024-12-12 10:40:39.509664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.647 [2024-12-12 10:40:39.509698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.647 qpair failed and we were unable to recover it. 
00:27:05.647 [2024-12-12 10:40:39.509887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.647 [2024-12-12 10:40:39.509921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.647 qpair failed and we were unable to recover it. 00:27:05.647 [2024-12-12 10:40:39.510148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.647 [2024-12-12 10:40:39.510181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.647 qpair failed and we were unable to recover it. 00:27:05.647 [2024-12-12 10:40:39.510375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.647 [2024-12-12 10:40:39.510409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.647 qpair failed and we were unable to recover it. 00:27:05.647 [2024-12-12 10:40:39.510666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.647 [2024-12-12 10:40:39.510702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.647 qpair failed and we were unable to recover it. 00:27:05.647 [2024-12-12 10:40:39.510956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.647 [2024-12-12 10:40:39.510989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.647 qpair failed and we were unable to recover it. 00:27:05.647 [2024-12-12 10:40:39.511241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.647 [2024-12-12 10:40:39.511274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.647 qpair failed and we were unable to recover it. 00:27:05.647 [2024-12-12 10:40:39.511582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.647 [2024-12-12 10:40:39.511617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.647 qpair failed and we were unable to recover it. 00:27:05.647 [2024-12-12 10:40:39.511816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.647 [2024-12-12 10:40:39.511849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.647 qpair failed and we were unable to recover it. 00:27:05.647 [2024-12-12 10:40:39.512128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.647 [2024-12-12 10:40:39.512161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.647 qpair failed and we were unable to recover it. 00:27:05.647 [2024-12-12 10:40:39.512465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.647 [2024-12-12 10:40:39.512498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.647 qpair failed and we were unable to recover it. 
00:27:05.647 [2024-12-12 10:40:39.512717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.648 [2024-12-12 10:40:39.512752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.648 qpair failed and we were unable to recover it. 00:27:05.648 [2024-12-12 10:40:39.513009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.648 [2024-12-12 10:40:39.513043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.648 qpair failed and we were unable to recover it. 00:27:05.648 [2024-12-12 10:40:39.513321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.648 [2024-12-12 10:40:39.513354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.648 qpair failed and we were unable to recover it. 00:27:05.648 [2024-12-12 10:40:39.513633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.648 [2024-12-12 10:40:39.513667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.648 qpair failed and we were unable to recover it. 00:27:05.648 [2024-12-12 10:40:39.513912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.648 [2024-12-12 10:40:39.513946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.648 qpair failed and we were unable to recover it. 00:27:05.648 [2024-12-12 10:40:39.514153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.648 [2024-12-12 10:40:39.514188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.648 qpair failed and we were unable to recover it. 00:27:05.648 [2024-12-12 10:40:39.514437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.648 [2024-12-12 10:40:39.514470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.648 qpair failed and we were unable to recover it. 00:27:05.648 [2024-12-12 10:40:39.514694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.648 [2024-12-12 10:40:39.514729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.648 qpair failed and we were unable to recover it. 00:27:05.648 [2024-12-12 10:40:39.515005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.648 [2024-12-12 10:40:39.515039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.648 qpair failed and we were unable to recover it. 00:27:05.648 [2024-12-12 10:40:39.515326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.648 [2024-12-12 10:40:39.515359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.648 qpair failed and we were unable to recover it. 
00:27:05.648 [2024-12-12 10:40:39.515601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.648 [2024-12-12 10:40:39.515637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.648 qpair failed and we were unable to recover it. 00:27:05.648 [2024-12-12 10:40:39.515821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.648 [2024-12-12 10:40:39.515855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.648 qpair failed and we were unable to recover it. 00:27:05.648 [2024-12-12 10:40:39.516050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.648 [2024-12-12 10:40:39.516084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.648 qpair failed and we were unable to recover it. 00:27:05.648 [2024-12-12 10:40:39.516336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.648 [2024-12-12 10:40:39.516370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.648 qpair failed and we were unable to recover it. 00:27:05.648 [2024-12-12 10:40:39.516562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.648 [2024-12-12 10:40:39.516607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.648 qpair failed and we were unable to recover it. 00:27:05.648 [2024-12-12 10:40:39.516887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.648 [2024-12-12 10:40:39.516921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.648 qpair failed and we were unable to recover it. 00:27:05.648 [2024-12-12 10:40:39.517216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.648 [2024-12-12 10:40:39.517251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.648 qpair failed and we were unable to recover it. 00:27:05.648 [2024-12-12 10:40:39.517520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.648 [2024-12-12 10:40:39.517553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.648 qpair failed and we were unable to recover it. 00:27:05.648 [2024-12-12 10:40:39.517847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.648 [2024-12-12 10:40:39.517888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.648 qpair failed and we were unable to recover it. 00:27:05.648 [2024-12-12 10:40:39.518156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.648 [2024-12-12 10:40:39.518190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.648 qpair failed and we were unable to recover it. 
00:27:05.648 [2024-12-12 10:40:39.518388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.648 [2024-12-12 10:40:39.518421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.648 qpair failed and we were unable to recover it. 00:27:05.648 [2024-12-12 10:40:39.518688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.648 [2024-12-12 10:40:39.518723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.648 qpair failed and we were unable to recover it. 00:27:05.648 [2024-12-12 10:40:39.518927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.648 [2024-12-12 10:40:39.518961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.648 qpair failed and we were unable to recover it. 00:27:05.648 [2024-12-12 10:40:39.519234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.648 [2024-12-12 10:40:39.519268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.648 qpair failed and we were unable to recover it. 00:27:05.648 [2024-12-12 10:40:39.519474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.648 [2024-12-12 10:40:39.519507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.648 qpair failed and we were unable to recover it. 00:27:05.648 [2024-12-12 10:40:39.519706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.648 [2024-12-12 10:40:39.519742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.648 qpair failed and we were unable to recover it. 00:27:05.648 [2024-12-12 10:40:39.520015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.648 [2024-12-12 10:40:39.520050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.648 qpair failed and we were unable to recover it. 00:27:05.648 [2024-12-12 10:40:39.520337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.648 [2024-12-12 10:40:39.520371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.648 qpair failed and we were unable to recover it. 00:27:05.648 [2024-12-12 10:40:39.520511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.648 [2024-12-12 10:40:39.520545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.648 qpair failed and we were unable to recover it. 00:27:05.648 [2024-12-12 10:40:39.520830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.648 [2024-12-12 10:40:39.520864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.648 qpair failed and we were unable to recover it. 
00:27:05.648 [2024-12-12 10:40:39.521085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.648 [2024-12-12 10:40:39.521118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420
00:27:05.648 qpair failed and we were unable to recover it.
00:27:05.654 (the three-line sequence above, connect() failed with errno = 111 (ECONNREFUSED), sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.", repeats 210 times between 10:40:39.521085 and 10:40:39.577236; the repetitions are collapsed here)
00:27:05.654 [2024-12-12 10:40:39.577489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.654 [2024-12-12 10:40:39.577524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.654 qpair failed and we were unable to recover it. 00:27:05.654 [2024-12-12 10:40:39.577673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.654 [2024-12-12 10:40:39.577707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.654 qpair failed and we were unable to recover it. 00:27:05.654 [2024-12-12 10:40:39.577929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.654 [2024-12-12 10:40:39.577962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.654 qpair failed and we were unable to recover it. 00:27:05.654 [2024-12-12 10:40:39.578155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.654 [2024-12-12 10:40:39.578190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.654 qpair failed and we were unable to recover it. 00:27:05.654 [2024-12-12 10:40:39.578392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.654 [2024-12-12 10:40:39.578426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.654 qpair failed and we were unable to recover it. 00:27:05.654 [2024-12-12 10:40:39.578613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.654 [2024-12-12 10:40:39.578653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.654 qpair failed and we were unable to recover it. 00:27:05.654 [2024-12-12 10:40:39.578851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.654 [2024-12-12 10:40:39.578885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.654 qpair failed and we were unable to recover it. 00:27:05.654 [2024-12-12 10:40:39.579071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.654 [2024-12-12 10:40:39.579105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.654 qpair failed and we were unable to recover it. 00:27:05.654 [2024-12-12 10:40:39.579310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.654 [2024-12-12 10:40:39.579345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.654 qpair failed and we were unable to recover it. 00:27:05.654 [2024-12-12 10:40:39.579545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.654 [2024-12-12 10:40:39.579590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.654 qpair failed and we were unable to recover it. 
00:27:05.654 [2024-12-12 10:40:39.579850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.654 [2024-12-12 10:40:39.579885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.654 qpair failed and we were unable to recover it. 00:27:05.654 [2024-12-12 10:40:39.580085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.654 [2024-12-12 10:40:39.580118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.654 qpair failed and we were unable to recover it. 00:27:05.654 [2024-12-12 10:40:39.580323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.654 [2024-12-12 10:40:39.580358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.654 qpair failed and we were unable to recover it. 00:27:05.654 [2024-12-12 10:40:39.580626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.654 [2024-12-12 10:40:39.580661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.654 qpair failed and we were unable to recover it. 00:27:05.654 [2024-12-12 10:40:39.580942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.654 [2024-12-12 10:40:39.580977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.654 qpair failed and we were unable to recover it. 00:27:05.654 [2024-12-12 10:40:39.581259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.654 [2024-12-12 10:40:39.581294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.654 qpair failed and we were unable to recover it. 00:27:05.654 [2024-12-12 10:40:39.581408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.654 [2024-12-12 10:40:39.581442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.654 qpair failed and we were unable to recover it. 00:27:05.654 [2024-12-12 10:40:39.581718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.654 [2024-12-12 10:40:39.581753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.654 qpair failed and we were unable to recover it. 00:27:05.654 [2024-12-12 10:40:39.582032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.654 [2024-12-12 10:40:39.582066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.654 qpair failed and we were unable to recover it. 00:27:05.654 [2024-12-12 10:40:39.582283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.654 [2024-12-12 10:40:39.582318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.654 qpair failed and we were unable to recover it. 
00:27:05.654 [2024-12-12 10:40:39.582606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.654 [2024-12-12 10:40:39.582642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.654 qpair failed and we were unable to recover it. 00:27:05.654 [2024-12-12 10:40:39.582849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.654 [2024-12-12 10:40:39.582884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.654 qpair failed and we were unable to recover it. 00:27:05.654 [2024-12-12 10:40:39.583112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.654 [2024-12-12 10:40:39.583146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.654 qpair failed and we were unable to recover it. 00:27:05.654 [2024-12-12 10:40:39.583417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.654 [2024-12-12 10:40:39.583451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.654 qpair failed and we were unable to recover it. 00:27:05.654 [2024-12-12 10:40:39.583711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.654 [2024-12-12 10:40:39.583745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.654 qpair failed and we were unable to recover it. 00:27:05.654 [2024-12-12 10:40:39.583870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.654 [2024-12-12 10:40:39.583904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.654 qpair failed and we were unable to recover it. 00:27:05.654 [2024-12-12 10:40:39.584182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.654 [2024-12-12 10:40:39.584217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.654 qpair failed and we were unable to recover it. 00:27:05.654 [2024-12-12 10:40:39.584348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.654 [2024-12-12 10:40:39.584381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.654 qpair failed and we were unable to recover it. 00:27:05.654 [2024-12-12 10:40:39.584605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.654 [2024-12-12 10:40:39.584640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-12-12 10:40:39.584834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-12-12 10:40:39.584869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 
00:27:05.655 [2024-12-12 10:40:39.585123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-12-12 10:40:39.585157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-12-12 10:40:39.585352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-12-12 10:40:39.585386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-12-12 10:40:39.585590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-12-12 10:40:39.585625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-12-12 10:40:39.585890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-12-12 10:40:39.585924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-12-12 10:40:39.586125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-12-12 10:40:39.586159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-12-12 10:40:39.586417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-12-12 10:40:39.586451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-12-12 10:40:39.586707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-12-12 10:40:39.586743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-12-12 10:40:39.587050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-12-12 10:40:39.587084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-12-12 10:40:39.587280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-12-12 10:40:39.587314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-12-12 10:40:39.587495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-12-12 10:40:39.587529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 
00:27:05.655 [2024-12-12 10:40:39.587738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-12-12 10:40:39.587774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-12-12 10:40:39.588049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-12-12 10:40:39.588083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-12-12 10:40:39.588308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-12-12 10:40:39.588342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-12-12 10:40:39.588522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-12-12 10:40:39.588556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-12-12 10:40:39.588844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-12-12 10:40:39.588878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-12-12 10:40:39.589154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-12-12 10:40:39.589194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-12-12 10:40:39.589451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-12-12 10:40:39.589485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-12-12 10:40:39.589783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-12-12 10:40:39.589819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-12-12 10:40:39.590090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-12-12 10:40:39.590123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-12-12 10:40:39.590384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-12-12 10:40:39.590418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 
00:27:05.655 [2024-12-12 10:40:39.590719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-12-12 10:40:39.590753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-12-12 10:40:39.591016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-12-12 10:40:39.591049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-12-12 10:40:39.591346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-12-12 10:40:39.591379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-12-12 10:40:39.591606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-12-12 10:40:39.591641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-12-12 10:40:39.591841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-12-12 10:40:39.591875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-12-12 10:40:39.592151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-12-12 10:40:39.592186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-12-12 10:40:39.592328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-12-12 10:40:39.592363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-12-12 10:40:39.592616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-12-12 10:40:39.592651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-12-12 10:40:39.592840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-12-12 10:40:39.592874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-12-12 10:40:39.593064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-12-12 10:40:39.593099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 
00:27:05.655 [2024-12-12 10:40:39.593292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-12-12 10:40:39.593328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-12-12 10:40:39.593550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-12-12 10:40:39.593597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-12-12 10:40:39.593819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-12-12 10:40:39.593854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-12-12 10:40:39.594105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-12-12 10:40:39.594140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-12-12 10:40:39.594402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-12-12 10:40:39.594436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-12-12 10:40:39.594675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-12-12 10:40:39.594711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-12-12 10:40:39.594992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-12-12 10:40:39.595026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-12-12 10:40:39.595320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-12-12 10:40:39.595355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-12-12 10:40:39.595551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-12-12 10:40:39.595595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-12-12 10:40:39.595780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-12-12 10:40:39.595814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 
00:27:05.656 [2024-12-12 10:40:39.595958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-12-12 10:40:39.595993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-12-12 10:40:39.596237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-12-12 10:40:39.596271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-12-12 10:40:39.596529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-12-12 10:40:39.596563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-12-12 10:40:39.596831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-12-12 10:40:39.596866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-12-12 10:40:39.597144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-12-12 10:40:39.597178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-12-12 10:40:39.597453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-12-12 10:40:39.597487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-12-12 10:40:39.597674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-12-12 10:40:39.597709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-12-12 10:40:39.597902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-12-12 10:40:39.597938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-12-12 10:40:39.598123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-12-12 10:40:39.598159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-12-12 10:40:39.598441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-12-12 10:40:39.598475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 
00:27:05.656 [2024-12-12 10:40:39.598735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-12-12 10:40:39.598772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-12-12 10:40:39.598887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-12-12 10:40:39.598921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-12-12 10:40:39.599200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-12-12 10:40:39.599235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-12-12 10:40:39.599387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-12-12 10:40:39.599421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-12-12 10:40:39.599716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-12-12 10:40:39.599751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-12-12 10:40:39.600020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-12-12 10:40:39.600062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-12-12 10:40:39.600273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-12-12 10:40:39.600310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-12-12 10:40:39.600511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-12-12 10:40:39.600545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-12-12 10:40:39.600820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-12-12 10:40:39.600855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-12-12 10:40:39.601168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-12-12 10:40:39.601204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 
00:27:05.656 [2024-12-12 10:40:39.601486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-12-12 10:40:39.601521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-12-12 10:40:39.601653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-12-12 10:40:39.601689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-12-12 10:40:39.601965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-12-12 10:40:39.602001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-12-12 10:40:39.602211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-12-12 10:40:39.602245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-12-12 10:40:39.602494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-12-12 10:40:39.602529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-12-12 10:40:39.602726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-12-12 10:40:39.602762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-12-12 10:40:39.602970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-12-12 10:40:39.603004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-12-12 10:40:39.603210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-12-12 10:40:39.603245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-12-12 10:40:39.603513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-12-12 10:40:39.603551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-12-12 10:40:39.603794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-12-12 10:40:39.603830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 
00:27:05.656 [2024-12-12 10:40:39.604041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-12-12 10:40:39.604075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-12-12 10:40:39.604293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-12-12 10:40:39.604327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-12-12 10:40:39.604523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-12-12 10:40:39.604558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.657 [2024-12-12 10:40:39.604853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-12-12 10:40:39.604888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 00:27:05.657 [2024-12-12 10:40:39.605095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-12-12 10:40:39.605129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 00:27:05.657 [2024-12-12 10:40:39.605412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-12-12 10:40:39.605446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 00:27:05.657 [2024-12-12 10:40:39.605658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-12-12 10:40:39.605693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 00:27:05.657 [2024-12-12 10:40:39.605948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-12-12 10:40:39.605982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 00:27:05.657 [2024-12-12 10:40:39.606168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-12-12 10:40:39.606202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 00:27:05.657 [2024-12-12 10:40:39.606456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-12-12 10:40:39.606491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 
00:27:05.657 [2024-12-12 10:40:39.606622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-12-12 10:40:39.606658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 00:27:05.657 [2024-12-12 10:40:39.606860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-12-12 10:40:39.606894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 00:27:05.657 [2024-12-12 10:40:39.607100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-12-12 10:40:39.607137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 00:27:05.657 [2024-12-12 10:40:39.607342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-12-12 10:40:39.607376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 00:27:05.657 [2024-12-12 10:40:39.607592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-12-12 10:40:39.607629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 00:27:05.657 [2024-12-12 10:40:39.607829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-12-12 10:40:39.607864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 00:27:05.657 [2024-12-12 10:40:39.608140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-12-12 10:40:39.608175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 00:27:05.657 [2024-12-12 10:40:39.608397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-12-12 10:40:39.608432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 00:27:05.657 [2024-12-12 10:40:39.608654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-12-12 10:40:39.608690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 00:27:05.657 [2024-12-12 10:40:39.608948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-12-12 10:40:39.608982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 
00:27:05.657 [2024-12-12 10:40:39.609115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-12-12 10:40:39.609149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 00:27:05.657 [2024-12-12 10:40:39.609335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-12-12 10:40:39.609370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 00:27:05.657 [2024-12-12 10:40:39.609556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-12-12 10:40:39.609603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 00:27:05.657 [2024-12-12 10:40:39.609879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-12-12 10:40:39.609914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 00:27:05.657 [2024-12-12 10:40:39.610220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-12-12 10:40:39.610255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 00:27:05.657 [2024-12-12 10:40:39.610462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-12-12 10:40:39.610502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 00:27:05.657 [2024-12-12 10:40:39.610776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-12-12 10:40:39.610812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 00:27:05.657 [2024-12-12 10:40:39.611041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-12-12 10:40:39.611075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 00:27:05.657 [2024-12-12 10:40:39.611384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-12-12 10:40:39.611418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 00:27:05.657 [2024-12-12 10:40:39.611645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-12-12 10:40:39.611681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 
00:27:05.657 [2024-12-12 10:40:39.611881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-12-12 10:40:39.611916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 00:27:05.657 [2024-12-12 10:40:39.612049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-12-12 10:40:39.612083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 00:27:05.657 [2024-12-12 10:40:39.612280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-12-12 10:40:39.612314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 00:27:05.657 [2024-12-12 10:40:39.612567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-12-12 10:40:39.612612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 00:27:05.657 [2024-12-12 10:40:39.612766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-12-12 10:40:39.612800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 00:27:05.657 [2024-12-12 10:40:39.613061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-12-12 10:40:39.613095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 00:27:05.657 [2024-12-12 10:40:39.613386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-12-12 10:40:39.613421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 00:27:05.657 [2024-12-12 10:40:39.613692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-12-12 10:40:39.613728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 00:27:05.657 [2024-12-12 10:40:39.613974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-12-12 10:40:39.614009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 00:27:05.657 [2024-12-12 10:40:39.614266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-12-12 10:40:39.614302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 
00:27:05.938 [2024-12-12 10:40:39.636449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.938 [2024-12-12 10:40:39.636484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420
00:27:05.938 qpair failed and we were unable to recover it.
00:27:05.938 [2024-12-12 10:40:39.636704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.938 [2024-12-12 10:40:39.636741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420
00:27:05.938 qpair failed and we were unable to recover it.
00:27:05.938 [2024-12-12 10:40:39.637011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.938 [2024-12-12 10:40:39.637092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:05.938 qpair failed and we were unable to recover it.
00:27:05.938 [2024-12-12 10:40:39.637302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.938 [2024-12-12 10:40:39.637341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:05.938 qpair failed and we were unable to recover it.
00:27:05.938 [2024-12-12 10:40:39.637549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.938 [2024-12-12 10:40:39.637595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:05.938 qpair failed and we were unable to recover it.
00:27:05.938 [2024-12-12 10:40:39.637804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.938 [2024-12-12 10:40:39.637840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:05.938 qpair failed and we were unable to recover it.
00:27:05.938 [2024-12-12 10:40:39.638033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.938 [2024-12-12 10:40:39.638068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:05.938 qpair failed and we were unable to recover it.
00:27:05.938 [2024-12-12 10:40:39.638269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.938 [2024-12-12 10:40:39.638303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:05.938 qpair failed and we were unable to recover it.
00:27:05.938 [2024-12-12 10:40:39.638433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.938 [2024-12-12 10:40:39.638469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:05.938 qpair failed and we were unable to recover it.
00:27:05.938 [2024-12-12 10:40:39.638649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.938 [2024-12-12 10:40:39.638694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:05.938 qpair failed and we were unable to recover it.
00:27:05.938 [2024-12-12 10:40:39.640944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.938 [2024-12-12 10:40:39.640981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:05.938 qpair failed and we were unable to recover it.
00:27:05.938 [2024-12-12 10:40:39.641170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.938 [2024-12-12 10:40:39.641207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:05.938 qpair failed and we were unable to recover it.
00:27:05.938 [2024-12-12 10:40:39.641327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.938 [2024-12-12 10:40:39.641363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:05.938 qpair failed and we were unable to recover it.
00:27:05.938 [2024-12-12 10:40:39.641548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.938 [2024-12-12 10:40:39.641598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:05.938 qpair failed and we were unable to recover it.
00:27:05.938 [2024-12-12 10:40:39.641788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.938 [2024-12-12 10:40:39.641825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:05.938 qpair failed and we were unable to recover it.
00:27:05.938 [2024-12-12 10:40:39.642033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.938 [2024-12-12 10:40:39.642069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:05.938 qpair failed and we were unable to recover it.
00:27:05.938 [2024-12-12 10:40:39.642197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.938 [2024-12-12 10:40:39.642233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:05.938 qpair failed and we were unable to recover it.
00:27:05.938 [2024-12-12 10:40:39.642359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.938 [2024-12-12 10:40:39.642402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:05.938 qpair failed and we were unable to recover it.
00:27:05.938 [2024-12-12 10:40:39.642684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.938 [2024-12-12 10:40:39.642724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:05.939 qpair failed and we were unable to recover it.
00:27:05.939 [2024-12-12 10:40:39.642919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.939 [2024-12-12 10:40:39.642959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420
00:27:05.939 qpair failed and we were unable to recover it.
00:27:05.940 [2024-12-12 10:40:39.659521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.940 [2024-12-12 10:40:39.659554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.940 qpair failed and we were unable to recover it. 00:27:05.940 [2024-12-12 10:40:39.659822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.940 [2024-12-12 10:40:39.659856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.940 qpair failed and we were unable to recover it. 00:27:05.940 [2024-12-12 10:40:39.660059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.940 [2024-12-12 10:40:39.660092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.940 qpair failed and we were unable to recover it. 00:27:05.940 [2024-12-12 10:40:39.660284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.940 [2024-12-12 10:40:39.660317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.940 qpair failed and we were unable to recover it. 00:27:05.940 [2024-12-12 10:40:39.660594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.940 [2024-12-12 10:40:39.660628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.940 qpair failed and we were unable to recover it. 00:27:05.940 [2024-12-12 10:40:39.660747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.940 [2024-12-12 10:40:39.660781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.940 qpair failed and we were unable to recover it. 00:27:05.940 [2024-12-12 10:40:39.661030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.940 [2024-12-12 10:40:39.661065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.941 qpair failed and we were unable to recover it. 00:27:05.941 [2024-12-12 10:40:39.661263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.941 [2024-12-12 10:40:39.661297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.941 qpair failed and we were unable to recover it. 00:27:05.941 [2024-12-12 10:40:39.661567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.941 [2024-12-12 10:40:39.661612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.941 qpair failed and we were unable to recover it. 00:27:05.941 [2024-12-12 10:40:39.661902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.941 [2024-12-12 10:40:39.661941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.941 qpair failed and we were unable to recover it. 
00:27:05.941 [2024-12-12 10:40:39.662136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.941 [2024-12-12 10:40:39.662171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.941 qpair failed and we were unable to recover it. 00:27:05.941 [2024-12-12 10:40:39.662380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.941 [2024-12-12 10:40:39.662414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.941 qpair failed and we were unable to recover it. 00:27:05.941 [2024-12-12 10:40:39.662613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.941 [2024-12-12 10:40:39.662649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.941 qpair failed and we were unable to recover it. 00:27:05.941 [2024-12-12 10:40:39.662838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.941 [2024-12-12 10:40:39.662873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.941 qpair failed and we were unable to recover it. 00:27:05.941 [2024-12-12 10:40:39.663052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.941 [2024-12-12 10:40:39.663086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.941 qpair failed and we were unable to recover it. 00:27:05.941 [2024-12-12 10:40:39.663332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.941 [2024-12-12 10:40:39.663367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.941 qpair failed and we were unable to recover it. 00:27:05.941 [2024-12-12 10:40:39.663592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.941 [2024-12-12 10:40:39.663628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.941 qpair failed and we were unable to recover it. 00:27:05.941 [2024-12-12 10:40:39.663889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.941 [2024-12-12 10:40:39.663923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.941 qpair failed and we were unable to recover it. 00:27:05.941 [2024-12-12 10:40:39.664209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.941 [2024-12-12 10:40:39.664243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.941 qpair failed and we were unable to recover it. 00:27:05.941 [2024-12-12 10:40:39.664518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.941 [2024-12-12 10:40:39.664552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.941 qpair failed and we were unable to recover it. 
00:27:05.941 [2024-12-12 10:40:39.664764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.941 [2024-12-12 10:40:39.664798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.941 qpair failed and we were unable to recover it. 00:27:05.941 [2024-12-12 10:40:39.665076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.941 [2024-12-12 10:40:39.665110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.941 qpair failed and we were unable to recover it. 00:27:05.941 [2024-12-12 10:40:39.665288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.941 [2024-12-12 10:40:39.665324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.941 qpair failed and we were unable to recover it. 00:27:05.941 [2024-12-12 10:40:39.665467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.941 [2024-12-12 10:40:39.665501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.941 qpair failed and we were unable to recover it. 00:27:05.941 [2024-12-12 10:40:39.665774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.941 [2024-12-12 10:40:39.665809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.941 qpair failed and we were unable to recover it. 00:27:05.941 [2024-12-12 10:40:39.665993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.941 [2024-12-12 10:40:39.666027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.941 qpair failed and we were unable to recover it. 00:27:05.941 [2024-12-12 10:40:39.666296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.941 [2024-12-12 10:40:39.666331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.941 qpair failed and we were unable to recover it. 00:27:05.941 [2024-12-12 10:40:39.666542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.941 [2024-12-12 10:40:39.666589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.941 qpair failed and we were unable to recover it. 00:27:05.941 [2024-12-12 10:40:39.666728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.941 [2024-12-12 10:40:39.666763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.941 qpair failed and we were unable to recover it. 00:27:05.941 [2024-12-12 10:40:39.667010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.941 [2024-12-12 10:40:39.667044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.941 qpair failed and we were unable to recover it. 
00:27:05.941 [2024-12-12 10:40:39.667242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.941 [2024-12-12 10:40:39.667277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.941 qpair failed and we were unable to recover it. 00:27:05.941 [2024-12-12 10:40:39.667557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.941 [2024-12-12 10:40:39.667601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.941 qpair failed and we were unable to recover it. 00:27:05.941 [2024-12-12 10:40:39.667880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.941 [2024-12-12 10:40:39.667917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.941 qpair failed and we were unable to recover it. 00:27:05.941 [2024-12-12 10:40:39.668038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.941 [2024-12-12 10:40:39.668072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.941 qpair failed and we were unable to recover it. 00:27:05.941 [2024-12-12 10:40:39.668182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.941 [2024-12-12 10:40:39.668219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.941 qpair failed and we were unable to recover it. 00:27:05.941 [2024-12-12 10:40:39.668441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.941 [2024-12-12 10:40:39.668476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.941 qpair failed and we were unable to recover it. 00:27:05.941 [2024-12-12 10:40:39.668666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.941 [2024-12-12 10:40:39.668703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.941 qpair failed and we were unable to recover it. 00:27:05.941 [2024-12-12 10:40:39.668946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.941 [2024-12-12 10:40:39.668980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.941 qpair failed and we were unable to recover it. 00:27:05.941 [2024-12-12 10:40:39.669113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.941 [2024-12-12 10:40:39.669148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.941 qpair failed and we were unable to recover it. 00:27:05.941 [2024-12-12 10:40:39.669443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.941 [2024-12-12 10:40:39.669477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.941 qpair failed and we were unable to recover it. 
00:27:05.941 [2024-12-12 10:40:39.669730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.941 [2024-12-12 10:40:39.669767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.941 qpair failed and we were unable to recover it. 00:27:05.941 [2024-12-12 10:40:39.670026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.941 [2024-12-12 10:40:39.670061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.941 qpair failed and we were unable to recover it. 00:27:05.941 [2024-12-12 10:40:39.670261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.941 [2024-12-12 10:40:39.670296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.941 qpair failed and we were unable to recover it. 00:27:05.941 [2024-12-12 10:40:39.670500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.941 [2024-12-12 10:40:39.670535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.941 qpair failed and we were unable to recover it. 00:27:05.941 [2024-12-12 10:40:39.670839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.941 [2024-12-12 10:40:39.670875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.941 qpair failed and we were unable to recover it. 00:27:05.941 [2024-12-12 10:40:39.671073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.941 [2024-12-12 10:40:39.671107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.942 qpair failed and we were unable to recover it. 00:27:05.942 [2024-12-12 10:40:39.671403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.942 [2024-12-12 10:40:39.671436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.942 qpair failed and we were unable to recover it. 00:27:05.942 [2024-12-12 10:40:39.671675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.942 [2024-12-12 10:40:39.671711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.942 qpair failed and we were unable to recover it. 00:27:05.942 [2024-12-12 10:40:39.671968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.942 [2024-12-12 10:40:39.672002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.942 qpair failed and we were unable to recover it. 00:27:05.942 [2024-12-12 10:40:39.672200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.942 [2024-12-12 10:40:39.672241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.942 qpair failed and we were unable to recover it. 
00:27:05.942 [2024-12-12 10:40:39.672505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.942 [2024-12-12 10:40:39.672540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.942 qpair failed and we were unable to recover it. 00:27:05.942 [2024-12-12 10:40:39.672687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.942 [2024-12-12 10:40:39.672723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.942 qpair failed and we were unable to recover it. 00:27:05.942 [2024-12-12 10:40:39.672946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.942 [2024-12-12 10:40:39.672980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.942 qpair failed and we were unable to recover it. 00:27:05.942 [2024-12-12 10:40:39.673232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.942 [2024-12-12 10:40:39.673265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.942 qpair failed and we were unable to recover it. 00:27:05.942 [2024-12-12 10:40:39.673540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.942 [2024-12-12 10:40:39.673586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.942 qpair failed and we were unable to recover it. 00:27:05.942 [2024-12-12 10:40:39.673865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.942 [2024-12-12 10:40:39.673899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.942 qpair failed and we were unable to recover it. 00:27:05.942 [2024-12-12 10:40:39.674127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.942 [2024-12-12 10:40:39.674162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.942 qpair failed and we were unable to recover it. 00:27:05.942 [2024-12-12 10:40:39.674408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.942 [2024-12-12 10:40:39.674444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.942 qpair failed and we were unable to recover it. 00:27:05.942 [2024-12-12 10:40:39.674723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.942 [2024-12-12 10:40:39.674759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.942 qpair failed and we were unable to recover it. 00:27:05.942 [2024-12-12 10:40:39.674909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.942 [2024-12-12 10:40:39.674944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.942 qpair failed and we were unable to recover it. 
00:27:05.942 [2024-12-12 10:40:39.675076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.942 [2024-12-12 10:40:39.675110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.942 qpair failed and we were unable to recover it. 00:27:05.942 [2024-12-12 10:40:39.675318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.942 [2024-12-12 10:40:39.675352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.942 qpair failed and we were unable to recover it. 00:27:05.942 [2024-12-12 10:40:39.675583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.942 [2024-12-12 10:40:39.675618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.942 qpair failed and we were unable to recover it. 00:27:05.942 [2024-12-12 10:40:39.675824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.942 [2024-12-12 10:40:39.675858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.942 qpair failed and we were unable to recover it. 00:27:05.942 [2024-12-12 10:40:39.676040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.942 [2024-12-12 10:40:39.676074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.942 qpair failed and we were unable to recover it. 00:27:05.942 [2024-12-12 10:40:39.676285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.942 [2024-12-12 10:40:39.676318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.942 qpair failed and we were unable to recover it. 00:27:05.942 [2024-12-12 10:40:39.676567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.942 [2024-12-12 10:40:39.676611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.942 qpair failed and we were unable to recover it. 00:27:05.942 [2024-12-12 10:40:39.676839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.942 [2024-12-12 10:40:39.676873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.942 qpair failed and we were unable to recover it. 00:27:05.942 [2024-12-12 10:40:39.677127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.942 [2024-12-12 10:40:39.677160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.942 qpair failed and we were unable to recover it. 00:27:05.942 [2024-12-12 10:40:39.677417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.942 [2024-12-12 10:40:39.677451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.942 qpair failed and we were unable to recover it. 
00:27:05.942 [2024-12-12 10:40:39.677655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.942 [2024-12-12 10:40:39.677690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.942 qpair failed and we were unable to recover it. 00:27:05.942 [2024-12-12 10:40:39.677893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.942 [2024-12-12 10:40:39.677927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.942 qpair failed and we were unable to recover it. 00:27:05.942 [2024-12-12 10:40:39.678136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.942 [2024-12-12 10:40:39.678170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.942 qpair failed and we were unable to recover it. 00:27:05.942 [2024-12-12 10:40:39.678427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.942 [2024-12-12 10:40:39.678461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.942 qpair failed and we were unable to recover it. 00:27:05.942 [2024-12-12 10:40:39.678724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.942 [2024-12-12 10:40:39.678761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.942 qpair failed and we were unable to recover it. 00:27:05.942 [2024-12-12 10:40:39.679059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.942 [2024-12-12 10:40:39.679092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.942 qpair failed and we were unable to recover it. 00:27:05.942 [2024-12-12 10:40:39.679313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.942 [2024-12-12 10:40:39.679349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.942 qpair failed and we were unable to recover it. 00:27:05.942 [2024-12-12 10:40:39.679550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.942 [2024-12-12 10:40:39.679597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.942 qpair failed and we were unable to recover it. 00:27:05.942 [2024-12-12 10:40:39.679849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.942 [2024-12-12 10:40:39.679883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.942 qpair failed and we were unable to recover it. 00:27:05.942 [2024-12-12 10:40:39.680138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.943 [2024-12-12 10:40:39.680172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.943 qpair failed and we were unable to recover it. 
00:27:05.943 [2024-12-12 10:40:39.680371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.943 [2024-12-12 10:40:39.680406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.943 qpair failed and we were unable to recover it. 00:27:05.943 [2024-12-12 10:40:39.680687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.943 [2024-12-12 10:40:39.680723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.943 qpair failed and we were unable to recover it. 00:27:05.943 [2024-12-12 10:40:39.681004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.943 [2024-12-12 10:40:39.681038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.943 qpair failed and we were unable to recover it. 00:27:05.943 [2024-12-12 10:40:39.681233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.943 [2024-12-12 10:40:39.681269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.943 qpair failed and we were unable to recover it. 00:27:05.943 [2024-12-12 10:40:39.681466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.943 [2024-12-12 10:40:39.681501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.943 qpair failed and we were unable to recover it. 00:27:05.943 [2024-12-12 10:40:39.681703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.943 [2024-12-12 10:40:39.681738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.943 qpair failed and we were unable to recover it. 00:27:05.943 [2024-12-12 10:40:39.681873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.943 [2024-12-12 10:40:39.681908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.943 qpair failed and we were unable to recover it. 00:27:05.943 [2024-12-12 10:40:39.682132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.943 [2024-12-12 10:40:39.682166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.943 qpair failed and we were unable to recover it. 00:27:05.943 [2024-12-12 10:40:39.682456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.943 [2024-12-12 10:40:39.682491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.943 qpair failed and we were unable to recover it. 00:27:05.943 [2024-12-12 10:40:39.682768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.943 [2024-12-12 10:40:39.682811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.943 qpair failed and we were unable to recover it. 
00:27:05.943 [2024-12-12 10:40:39.682999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.943 [2024-12-12 10:40:39.683033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.943 qpair failed and we were unable to recover it. 00:27:05.943 [2024-12-12 10:40:39.683316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.943 [2024-12-12 10:40:39.683350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.943 qpair failed and we were unable to recover it. 00:27:05.943 [2024-12-12 10:40:39.683554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.943 [2024-12-12 10:40:39.683614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.943 qpair failed and we were unable to recover it. 00:27:05.943 [2024-12-12 10:40:39.683889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.943 [2024-12-12 10:40:39.683924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.943 qpair failed and we were unable to recover it. 00:27:05.943 [2024-12-12 10:40:39.684133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.943 [2024-12-12 10:40:39.684167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.943 qpair failed and we were unable to recover it. 00:27:05.943 [2024-12-12 10:40:39.684444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.943 [2024-12-12 10:40:39.684478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.943 qpair failed and we were unable to recover it. 00:27:05.943 [2024-12-12 10:40:39.684731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.943 [2024-12-12 10:40:39.684767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.943 qpair failed and we were unable to recover it. 00:27:05.943 [2024-12-12 10:40:39.685025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.943 [2024-12-12 10:40:39.685059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.943 qpair failed and we were unable to recover it. 00:27:05.943 [2024-12-12 10:40:39.685361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.943 [2024-12-12 10:40:39.685395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.943 qpair failed and we were unable to recover it. 00:27:05.943 [2024-12-12 10:40:39.685688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.943 [2024-12-12 10:40:39.685724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.943 qpair failed and we were unable to recover it. 
00:27:05.943 [2024-12-12 10:40:39.685953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.943 [2024-12-12 10:40:39.685988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.943 qpair failed and we were unable to recover it. 00:27:05.943 [2024-12-12 10:40:39.686200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.943 [2024-12-12 10:40:39.686234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.943 qpair failed and we were unable to recover it. 00:27:05.943 [2024-12-12 10:40:39.686537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.943 [2024-12-12 10:40:39.686598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.943 qpair failed and we were unable to recover it. 00:27:05.943 [2024-12-12 10:40:39.686878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.943 [2024-12-12 10:40:39.686913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.943 qpair failed and we were unable to recover it. 00:27:05.943 [2024-12-12 10:40:39.687227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.943 [2024-12-12 10:40:39.687261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.943 qpair failed and we were unable to recover it. 00:27:05.943 [2024-12-12 10:40:39.687537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.943 [2024-12-12 10:40:39.687585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.943 qpair failed and we were unable to recover it. 00:27:05.943 [2024-12-12 10:40:39.687860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.943 [2024-12-12 10:40:39.687893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.943 qpair failed and we were unable to recover it. 00:27:05.943 [2024-12-12 10:40:39.688167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.943 [2024-12-12 10:40:39.688200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.943 qpair failed and we were unable to recover it. 00:27:05.943 [2024-12-12 10:40:39.688392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.943 [2024-12-12 10:40:39.688427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.943 qpair failed and we were unable to recover it. 00:27:05.943 [2024-12-12 10:40:39.688632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.943 [2024-12-12 10:40:39.688668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.943 qpair failed and we were unable to recover it. 
00:27:05.943 [2024-12-12 10:40:39.688871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.943 [2024-12-12 10:40:39.688905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.943 qpair failed and we were unable to recover it. 00:27:05.943 [2024-12-12 10:40:39.689167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.943 [2024-12-12 10:40:39.689203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.943 qpair failed and we were unable to recover it. 00:27:05.943 [2024-12-12 10:40:39.689488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.943 [2024-12-12 10:40:39.689521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.943 qpair failed and we were unable to recover it. 00:27:05.943 [2024-12-12 10:40:39.689833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.943 [2024-12-12 10:40:39.689869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.943 qpair failed and we were unable to recover it. 00:27:05.943 [2024-12-12 10:40:39.690103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.943 [2024-12-12 10:40:39.690137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.943 qpair failed and we were unable to recover it. 00:27:05.943 [2024-12-12 10:40:39.690404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.943 [2024-12-12 10:40:39.690439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.943 qpair failed and we were unable to recover it. 00:27:05.943 [2024-12-12 10:40:39.690729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.943 [2024-12-12 10:40:39.690765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.943 qpair failed and we were unable to recover it. 00:27:05.943 [2024-12-12 10:40:39.690961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.943 [2024-12-12 10:40:39.690996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.943 qpair failed and we were unable to recover it. 00:27:05.943 [2024-12-12 10:40:39.691293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.944 [2024-12-12 10:40:39.691328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.944 qpair failed and we were unable to recover it. 00:27:05.944 [2024-12-12 10:40:39.691535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.944 [2024-12-12 10:40:39.691580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.944 qpair failed and we were unable to recover it. 
00:27:05.944 [2024-12-12 10:40:39.691854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.944 [2024-12-12 10:40:39.691889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.944 qpair failed and we were unable to recover it. 00:27:05.944 [2024-12-12 10:40:39.692190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.944 [2024-12-12 10:40:39.692224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.944 qpair failed and we were unable to recover it. 00:27:05.944 [2024-12-12 10:40:39.692503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.944 [2024-12-12 10:40:39.692538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.944 qpair failed and we were unable to recover it. 00:27:05.944 [2024-12-12 10:40:39.692756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.944 [2024-12-12 10:40:39.692789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.944 qpair failed and we were unable to recover it. 00:27:05.944 [2024-12-12 10:40:39.693047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.944 [2024-12-12 10:40:39.693081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.944 qpair failed and we were unable to recover it. 00:27:05.944 [2024-12-12 10:40:39.693285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.944 [2024-12-12 10:40:39.693320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.944 qpair failed and we were unable to recover it. 00:27:05.944 [2024-12-12 10:40:39.693434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.944 [2024-12-12 10:40:39.693468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.944 qpair failed and we were unable to recover it. 00:27:05.944 [2024-12-12 10:40:39.693738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.944 [2024-12-12 10:40:39.693774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.944 qpair failed and we were unable to recover it. 00:27:05.944 [2024-12-12 10:40:39.693913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.944 [2024-12-12 10:40:39.693948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.944 qpair failed and we were unable to recover it. 00:27:05.944 [2024-12-12 10:40:39.694213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.944 [2024-12-12 10:40:39.694252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.944 qpair failed and we were unable to recover it. 
00:27:05.944 [2024-12-12 10:40:39.694448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.944 [2024-12-12 10:40:39.694483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.944 qpair failed and we were unable to recover it. 00:27:05.944 [2024-12-12 10:40:39.694765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.944 [2024-12-12 10:40:39.694801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.944 qpair failed and we were unable to recover it. 00:27:05.944 [2024-12-12 10:40:39.695086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.944 [2024-12-12 10:40:39.695121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.944 qpair failed and we were unable to recover it. 00:27:05.944 [2024-12-12 10:40:39.695325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.944 [2024-12-12 10:40:39.695360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.944 qpair failed and we were unable to recover it. 00:27:05.944 [2024-12-12 10:40:39.695589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.944 [2024-12-12 10:40:39.695625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.944 qpair failed and we were unable to recover it. 00:27:05.944 [2024-12-12 10:40:39.695761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.944 [2024-12-12 10:40:39.695795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.944 qpair failed and we were unable to recover it. 00:27:05.944 [2024-12-12 10:40:39.696000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.944 [2024-12-12 10:40:39.696034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.944 qpair failed and we were unable to recover it. 00:27:05.944 [2024-12-12 10:40:39.696320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.944 [2024-12-12 10:40:39.696353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.944 qpair failed and we were unable to recover it. 00:27:05.944 [2024-12-12 10:40:39.696633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.944 [2024-12-12 10:40:39.696668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.944 qpair failed and we were unable to recover it. 00:27:05.944 [2024-12-12 10:40:39.696869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.944 [2024-12-12 10:40:39.696902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.944 qpair failed and we were unable to recover it. 
00:27:05.944 [2024-12-12 10:40:39.697112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.944 [2024-12-12 10:40:39.697146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420
00:27:05.944 qpair failed and we were unable to recover it.
[... the same three-message connect()/qpair failure repeats back-to-back through [2024-12-12 10:40:39.752348]; roughly 210 near-identical occurrences, differing only in their microsecond timestamps, omitted here ...]
00:27:05.950 [2024-12-12 10:40:39.752557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.950 [2024-12-12 10:40:39.752603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.950 qpair failed and we were unable to recover it. 00:27:05.950 [2024-12-12 10:40:39.752868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.950 [2024-12-12 10:40:39.752902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.950 qpair failed and we were unable to recover it. 00:27:05.950 [2024-12-12 10:40:39.753110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.950 [2024-12-12 10:40:39.753142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.950 qpair failed and we were unable to recover it. 00:27:05.950 [2024-12-12 10:40:39.753412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.950 [2024-12-12 10:40:39.753446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.950 qpair failed and we were unable to recover it. 00:27:05.950 [2024-12-12 10:40:39.753645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.950 [2024-12-12 10:40:39.753679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.950 qpair failed and we were unable to recover it. 00:27:05.950 [2024-12-12 10:40:39.753809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.950 [2024-12-12 10:40:39.753842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.950 qpair failed and we were unable to recover it. 00:27:05.950 [2024-12-12 10:40:39.754040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.950 [2024-12-12 10:40:39.754075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.950 qpair failed and we were unable to recover it. 00:27:05.950 [2024-12-12 10:40:39.754347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.950 [2024-12-12 10:40:39.754380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.950 qpair failed and we were unable to recover it. 00:27:05.950 [2024-12-12 10:40:39.754601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.950 [2024-12-12 10:40:39.754636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.950 qpair failed and we were unable to recover it. 00:27:05.950 [2024-12-12 10:40:39.754822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.950 [2024-12-12 10:40:39.754856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.950 qpair failed and we were unable to recover it. 
00:27:05.950 [2024-12-12 10:40:39.755056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.950 [2024-12-12 10:40:39.755089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.950 qpair failed and we were unable to recover it. 00:27:05.950 [2024-12-12 10:40:39.755306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.950 [2024-12-12 10:40:39.755339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.950 qpair failed and we were unable to recover it. 00:27:05.950 [2024-12-12 10:40:39.755635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.950 [2024-12-12 10:40:39.755671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.950 qpair failed and we were unable to recover it. 00:27:05.950 [2024-12-12 10:40:39.755882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.950 [2024-12-12 10:40:39.755915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.950 qpair failed and we were unable to recover it. 00:27:05.950 [2024-12-12 10:40:39.756191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.950 [2024-12-12 10:40:39.756224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.950 qpair failed and we were unable to recover it. 00:27:05.950 [2024-12-12 10:40:39.756513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.950 [2024-12-12 10:40:39.756548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.950 qpair failed and we were unable to recover it. 00:27:05.950 [2024-12-12 10:40:39.756675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.950 [2024-12-12 10:40:39.756708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.950 qpair failed and we were unable to recover it. 00:27:05.950 [2024-12-12 10:40:39.756929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.950 [2024-12-12 10:40:39.756962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.950 qpair failed and we were unable to recover it. 00:27:05.950 [2024-12-12 10:40:39.757167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.950 [2024-12-12 10:40:39.757201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.950 qpair failed and we were unable to recover it. 00:27:05.950 [2024-12-12 10:40:39.757473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.950 [2024-12-12 10:40:39.757512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.950 qpair failed and we were unable to recover it. 
00:27:05.950 [2024-12-12 10:40:39.757846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.950 [2024-12-12 10:40:39.757881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.950 qpair failed and we were unable to recover it. 00:27:05.950 [2024-12-12 10:40:39.758017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.950 [2024-12-12 10:40:39.758050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.950 qpair failed and we were unable to recover it. 00:27:05.950 [2024-12-12 10:40:39.758314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.950 [2024-12-12 10:40:39.758349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.950 qpair failed and we were unable to recover it. 00:27:05.950 [2024-12-12 10:40:39.758630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.950 [2024-12-12 10:40:39.758667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.950 qpair failed and we were unable to recover it. 00:27:05.950 [2024-12-12 10:40:39.758874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.950 [2024-12-12 10:40:39.758912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.950 qpair failed and we were unable to recover it. 00:27:05.950 [2024-12-12 10:40:39.759124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.950 [2024-12-12 10:40:39.759156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.950 qpair failed and we were unable to recover it. 00:27:05.950 [2024-12-12 10:40:39.759342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.950 [2024-12-12 10:40:39.759375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.950 qpair failed and we were unable to recover it. 00:27:05.950 [2024-12-12 10:40:39.759682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.950 [2024-12-12 10:40:39.759719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.950 qpair failed and we were unable to recover it. 00:27:05.950 [2024-12-12 10:40:39.759862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.950 [2024-12-12 10:40:39.759896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.950 qpair failed and we were unable to recover it. 00:27:05.950 [2024-12-12 10:40:39.760148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.950 [2024-12-12 10:40:39.760181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.950 qpair failed and we were unable to recover it. 
00:27:05.950 [2024-12-12 10:40:39.760365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.950 [2024-12-12 10:40:39.760399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.950 qpair failed and we were unable to recover it. 00:27:05.950 [2024-12-12 10:40:39.760606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.950 [2024-12-12 10:40:39.760642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.950 qpair failed and we were unable to recover it. 00:27:05.950 [2024-12-12 10:40:39.760863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.950 [2024-12-12 10:40:39.760896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.950 qpair failed and we were unable to recover it. 00:27:05.950 [2024-12-12 10:40:39.761182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.950 [2024-12-12 10:40:39.761216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.950 qpair failed and we were unable to recover it. 00:27:05.950 [2024-12-12 10:40:39.761399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.950 [2024-12-12 10:40:39.761433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.950 qpair failed and we were unable to recover it. 00:27:05.950 [2024-12-12 10:40:39.761700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.950 [2024-12-12 10:40:39.761735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.950 qpair failed and we were unable to recover it. 00:27:05.950 [2024-12-12 10:40:39.761935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.950 [2024-12-12 10:40:39.761969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.950 qpair failed and we were unable to recover it. 00:27:05.950 [2024-12-12 10:40:39.762230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.950 [2024-12-12 10:40:39.762264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.950 qpair failed and we were unable to recover it. 00:27:05.950 [2024-12-12 10:40:39.762542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.951 [2024-12-12 10:40:39.762585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.951 qpair failed and we were unable to recover it. 00:27:05.951 [2024-12-12 10:40:39.762717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.951 [2024-12-12 10:40:39.762750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.951 qpair failed and we were unable to recover it. 
00:27:05.951 [2024-12-12 10:40:39.762876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.951 [2024-12-12 10:40:39.762907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.951 qpair failed and we were unable to recover it. 00:27:05.951 [2024-12-12 10:40:39.763183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.951 [2024-12-12 10:40:39.763217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.951 qpair failed and we were unable to recover it. 00:27:05.951 [2024-12-12 10:40:39.763445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.951 [2024-12-12 10:40:39.763480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.951 qpair failed and we were unable to recover it. 00:27:05.951 [2024-12-12 10:40:39.763709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.951 [2024-12-12 10:40:39.763745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.951 qpair failed and we were unable to recover it. 00:27:05.951 [2024-12-12 10:40:39.763966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.951 [2024-12-12 10:40:39.763999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.951 qpair failed and we were unable to recover it. 00:27:05.951 [2024-12-12 10:40:39.764207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.951 [2024-12-12 10:40:39.764241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:05.951 qpair failed and we were unable to recover it. 00:27:05.951 [2024-12-12 10:40:39.764594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.951 [2024-12-12 10:40:39.764673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.951 qpair failed and we were unable to recover it. 00:27:05.951 [2024-12-12 10:40:39.764957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.951 [2024-12-12 10:40:39.764995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.951 qpair failed and we were unable to recover it. 00:27:05.951 [2024-12-12 10:40:39.765208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.951 [2024-12-12 10:40:39.765244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.951 qpair failed and we were unable to recover it. 00:27:05.951 [2024-12-12 10:40:39.765500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.951 [2024-12-12 10:40:39.765535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.951 qpair failed and we were unable to recover it. 
00:27:05.951 [2024-12-12 10:40:39.765804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.951 [2024-12-12 10:40:39.765840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.951 qpair failed and we were unable to recover it. 00:27:05.951 [2024-12-12 10:40:39.766121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.951 [2024-12-12 10:40:39.766155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.951 qpair failed and we were unable to recover it. 00:27:05.951 [2024-12-12 10:40:39.766305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.951 [2024-12-12 10:40:39.766339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.951 qpair failed and we were unable to recover it. 00:27:05.951 [2024-12-12 10:40:39.766545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.951 [2024-12-12 10:40:39.766589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.951 qpair failed and we were unable to recover it. 00:27:05.951 [2024-12-12 10:40:39.766803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.951 [2024-12-12 10:40:39.766837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.951 qpair failed and we were unable to recover it. 00:27:05.951 [2024-12-12 10:40:39.766984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.951 [2024-12-12 10:40:39.767019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.951 qpair failed and we were unable to recover it. 00:27:05.951 [2024-12-12 10:40:39.767288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.951 [2024-12-12 10:40:39.767322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.951 qpair failed and we were unable to recover it. 00:27:05.951 [2024-12-12 10:40:39.767580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.951 [2024-12-12 10:40:39.767615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.951 qpair failed and we were unable to recover it. 00:27:05.951 [2024-12-12 10:40:39.767813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.951 [2024-12-12 10:40:39.767846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.951 qpair failed and we were unable to recover it. 00:27:05.951 [2024-12-12 10:40:39.767991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.951 [2024-12-12 10:40:39.768041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.951 qpair failed and we were unable to recover it. 
00:27:05.951 [2024-12-12 10:40:39.768272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.951 [2024-12-12 10:40:39.768305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.951 qpair failed and we were unable to recover it. 00:27:05.951 [2024-12-12 10:40:39.768500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.951 [2024-12-12 10:40:39.768535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.951 qpair failed and we were unable to recover it. 00:27:05.951 [2024-12-12 10:40:39.768829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.951 [2024-12-12 10:40:39.768865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.951 qpair failed and we were unable to recover it. 00:27:05.951 [2024-12-12 10:40:39.769069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.951 [2024-12-12 10:40:39.769102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.951 qpair failed and we were unable to recover it. 00:27:05.951 [2024-12-12 10:40:39.769391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.951 [2024-12-12 10:40:39.769425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.951 qpair failed and we were unable to recover it. 00:27:05.951 [2024-12-12 10:40:39.769682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.951 [2024-12-12 10:40:39.769720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.951 qpair failed and we were unable to recover it. 00:27:05.951 [2024-12-12 10:40:39.770026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.951 [2024-12-12 10:40:39.770060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.951 qpair failed and we were unable to recover it. 00:27:05.951 [2024-12-12 10:40:39.770303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.951 [2024-12-12 10:40:39.770337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.951 qpair failed and we were unable to recover it. 00:27:05.951 [2024-12-12 10:40:39.770608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.951 [2024-12-12 10:40:39.770643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.951 qpair failed and we were unable to recover it. 00:27:05.951 [2024-12-12 10:40:39.770928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.951 [2024-12-12 10:40:39.770962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.951 qpair failed and we were unable to recover it. 
00:27:05.951 [2024-12-12 10:40:39.771206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.951 [2024-12-12 10:40:39.771240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.951 qpair failed and we were unable to recover it. 00:27:05.951 [2024-12-12 10:40:39.771443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.951 [2024-12-12 10:40:39.771478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.951 qpair failed and we were unable to recover it. 00:27:05.951 [2024-12-12 10:40:39.771735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.951 [2024-12-12 10:40:39.771770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.951 qpair failed and we were unable to recover it. 00:27:05.951 [2024-12-12 10:40:39.771919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.952 [2024-12-12 10:40:39.771954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.952 qpair failed and we were unable to recover it. 00:27:05.952 [2024-12-12 10:40:39.772151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.952 [2024-12-12 10:40:39.772184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.952 qpair failed and we were unable to recover it. 00:27:05.952 [2024-12-12 10:40:39.772464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.952 [2024-12-12 10:40:39.772498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.952 qpair failed and we were unable to recover it. 00:27:05.952 [2024-12-12 10:40:39.772822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.952 [2024-12-12 10:40:39.772857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.952 qpair failed and we were unable to recover it. 00:27:05.952 [2024-12-12 10:40:39.773149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.952 [2024-12-12 10:40:39.773184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.952 qpair failed and we were unable to recover it. 00:27:05.952 [2024-12-12 10:40:39.773401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.952 [2024-12-12 10:40:39.773435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.952 qpair failed and we were unable to recover it. 00:27:05.952 [2024-12-12 10:40:39.773692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.952 [2024-12-12 10:40:39.773727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.952 qpair failed and we were unable to recover it. 
00:27:05.952 [2024-12-12 10:40:39.773993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.952 [2024-12-12 10:40:39.774027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.952 qpair failed and we were unable to recover it. 00:27:05.952 [2024-12-12 10:40:39.774309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.952 [2024-12-12 10:40:39.774345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.952 qpair failed and we were unable to recover it. 00:27:05.952 [2024-12-12 10:40:39.774624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.952 [2024-12-12 10:40:39.774659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.952 qpair failed and we were unable to recover it. 00:27:05.952 [2024-12-12 10:40:39.774815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.952 [2024-12-12 10:40:39.774849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.952 qpair failed and we were unable to recover it. 00:27:05.952 [2024-12-12 10:40:39.775049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.952 [2024-12-12 10:40:39.775085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.952 qpair failed and we were unable to recover it. 00:27:05.952 [2024-12-12 10:40:39.775342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.952 [2024-12-12 10:40:39.775377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.952 qpair failed and we were unable to recover it. 00:27:05.952 [2024-12-12 10:40:39.775680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.952 [2024-12-12 10:40:39.775715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.952 qpair failed and we were unable to recover it. 00:27:05.952 [2024-12-12 10:40:39.775945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.952 [2024-12-12 10:40:39.775978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.952 qpair failed and we were unable to recover it. 00:27:05.952 [2024-12-12 10:40:39.776233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.952 [2024-12-12 10:40:39.776268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.952 qpair failed and we were unable to recover it. 00:27:05.952 [2024-12-12 10:40:39.776544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.952 [2024-12-12 10:40:39.776587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.952 qpair failed and we were unable to recover it. 
00:27:05.952 [2024-12-12 10:40:39.776807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.952 [2024-12-12 10:40:39.776841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.952 qpair failed and we were unable to recover it. 00:27:05.952 [2024-12-12 10:40:39.777129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.952 [2024-12-12 10:40:39.777163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.952 qpair failed and we were unable to recover it. 00:27:05.952 [2024-12-12 10:40:39.777363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.952 [2024-12-12 10:40:39.777400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.952 qpair failed and we were unable to recover it. 00:27:05.952 [2024-12-12 10:40:39.777660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.952 [2024-12-12 10:40:39.777696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.952 qpair failed and we were unable to recover it. 00:27:05.952 [2024-12-12 10:40:39.777901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.952 [2024-12-12 10:40:39.777938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.952 qpair failed and we were unable to recover it. 00:27:05.952 [2024-12-12 10:40:39.778254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.952 [2024-12-12 10:40:39.778291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.952 qpair failed and we were unable to recover it. 00:27:05.952 [2024-12-12 10:40:39.778581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.952 [2024-12-12 10:40:39.778618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.952 qpair failed and we were unable to recover it. 00:27:05.952 [2024-12-12 10:40:39.778805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.952 [2024-12-12 10:40:39.778840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.952 qpair failed and we were unable to recover it. 00:27:05.952 [2024-12-12 10:40:39.779102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.952 [2024-12-12 10:40:39.779138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.952 qpair failed and we were unable to recover it. 00:27:05.952 [2024-12-12 10:40:39.779365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.952 [2024-12-12 10:40:39.779399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.952 qpair failed and we were unable to recover it. 
00:27:05.952 [2024-12-12 10:40:39.779712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.952 [2024-12-12 10:40:39.779748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.952 qpair failed and we were unable to recover it. 00:27:05.952 [2024-12-12 10:40:39.779942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.952 [2024-12-12 10:40:39.779977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.952 qpair failed and we were unable to recover it. 00:27:05.952 [2024-12-12 10:40:39.780296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.952 [2024-12-12 10:40:39.780331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.952 qpair failed and we were unable to recover it. 00:27:05.952 [2024-12-12 10:40:39.780607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.952 [2024-12-12 10:40:39.780643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.952 qpair failed and we were unable to recover it. 00:27:05.952 [2024-12-12 10:40:39.780843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.952 [2024-12-12 10:40:39.780878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.952 qpair failed and we were unable to recover it. 00:27:05.952 [2024-12-12 10:40:39.781078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.952 [2024-12-12 10:40:39.781112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.952 qpair failed and we were unable to recover it. 00:27:05.952 [2024-12-12 10:40:39.781343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.952 [2024-12-12 10:40:39.781377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.952 qpair failed and we were unable to recover it. 00:27:05.952 [2024-12-12 10:40:39.781636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.952 [2024-12-12 10:40:39.781671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.952 qpair failed and we were unable to recover it. 00:27:05.952 [2024-12-12 10:40:39.781964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.952 [2024-12-12 10:40:39.781999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.952 qpair failed and we were unable to recover it. 00:27:05.952 [2024-12-12 10:40:39.782198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.952 [2024-12-12 10:40:39.782233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.952 qpair failed and we were unable to recover it. 
00:27:05.952 [2024-12-12 10:40:39.782377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.952 [2024-12-12 10:40:39.782412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.952 qpair failed and we were unable to recover it. 00:27:05.952 [2024-12-12 10:40:39.782713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.952 [2024-12-12 10:40:39.782748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.952 qpair failed and we were unable to recover it. 00:27:05.952 [2024-12-12 10:40:39.782968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.953 [2024-12-12 10:40:39.783002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.953 qpair failed and we were unable to recover it. 00:27:05.953 [2024-12-12 10:40:39.783260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.953 [2024-12-12 10:40:39.783296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.953 qpair failed and we were unable to recover it. 00:27:05.953 [2024-12-12 10:40:39.783497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.953 [2024-12-12 10:40:39.783531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.953 qpair failed and we were unable to recover it. 00:27:05.953 [2024-12-12 10:40:39.783833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.953 [2024-12-12 10:40:39.783869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.953 qpair failed and we were unable to recover it. 00:27:05.953 [2024-12-12 10:40:39.784081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.953 [2024-12-12 10:40:39.784115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.953 qpair failed and we were unable to recover it. 00:27:05.953 [2024-12-12 10:40:39.784249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.953 [2024-12-12 10:40:39.784283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.953 qpair failed and we were unable to recover it. 00:27:05.953 [2024-12-12 10:40:39.784430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.953 [2024-12-12 10:40:39.784463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.953 qpair failed and we were unable to recover it. 00:27:05.953 [2024-12-12 10:40:39.784744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.953 [2024-12-12 10:40:39.784780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.953 qpair failed and we were unable to recover it. 
00:27:05.953 [2024-12-12 10:40:39.784985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.953 [2024-12-12 10:40:39.785020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.953 qpair failed and we were unable to recover it. 00:27:05.953 [2024-12-12 10:40:39.785222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.953 [2024-12-12 10:40:39.785257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.953 qpair failed and we were unable to recover it. 00:27:05.953 [2024-12-12 10:40:39.785451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.953 [2024-12-12 10:40:39.785486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.953 qpair failed and we were unable to recover it. 00:27:05.953 [2024-12-12 10:40:39.785766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.953 [2024-12-12 10:40:39.785803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.953 qpair failed and we were unable to recover it. 00:27:05.953 [2024-12-12 10:40:39.786078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.953 [2024-12-12 10:40:39.786114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.953 qpair failed and we were unable to recover it. 00:27:05.953 [2024-12-12 10:40:39.786394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.953 [2024-12-12 10:40:39.786429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.953 qpair failed and we were unable to recover it. 00:27:05.953 [2024-12-12 10:40:39.786613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.953 [2024-12-12 10:40:39.786656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.953 qpair failed and we were unable to recover it. 00:27:05.953 [2024-12-12 10:40:39.786842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.953 [2024-12-12 10:40:39.786876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.953 qpair failed and we were unable to recover it. 00:27:05.953 [2024-12-12 10:40:39.787031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.953 [2024-12-12 10:40:39.787066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.953 qpair failed and we were unable to recover it. 00:27:05.953 [2024-12-12 10:40:39.787336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.953 [2024-12-12 10:40:39.787372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.953 qpair failed and we were unable to recover it. 
00:27:05.953 [2024-12-12 10:40:39.787690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.953 [2024-12-12 10:40:39.787725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:05.953 qpair failed and we were unable to recover it.
[... the identical three-line failure (posix.c:1054 connect() failed, errno = 111; nvme_tcp.c:2288 sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats verbatim for every retry from 10:40:39.787940 through 10:40:39.836740; only the timestamps differ ...]
00:27:05.958 [2024-12-12 10:40:39.836922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.958 [2024-12-12 10:40:39.836956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:05.958 qpair failed and we were unable to recover it.
00:27:05.958 [2024-12-12 10:40:39.837166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.959 [2024-12-12 10:40:39.837199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.959 qpair failed and we were unable to recover it. 00:27:05.959 [2024-12-12 10:40:39.837379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.959 [2024-12-12 10:40:39.837414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.959 qpair failed and we were unable to recover it. 00:27:05.959 [2024-12-12 10:40:39.837612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.959 [2024-12-12 10:40:39.837648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.959 qpair failed and we were unable to recover it. 00:27:05.959 [2024-12-12 10:40:39.837826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.959 [2024-12-12 10:40:39.837860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.959 qpair failed and we were unable to recover it. 00:27:05.959 [2024-12-12 10:40:39.838049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.959 [2024-12-12 10:40:39.838082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.959 qpair failed and we were unable to recover it. 00:27:05.959 [2024-12-12 10:40:39.838351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.959 [2024-12-12 10:40:39.838384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.959 qpair failed and we were unable to recover it. 00:27:05.959 [2024-12-12 10:40:39.838584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.959 [2024-12-12 10:40:39.838619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.959 qpair failed and we were unable to recover it. 00:27:05.959 [2024-12-12 10:40:39.838879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.959 [2024-12-12 10:40:39.838912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.959 qpair failed and we were unable to recover it. 00:27:05.959 [2024-12-12 10:40:39.839111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.959 [2024-12-12 10:40:39.839144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.959 qpair failed and we were unable to recover it. 00:27:05.959 [2024-12-12 10:40:39.839433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.959 [2024-12-12 10:40:39.839484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.959 qpair failed and we were unable to recover it. 
00:27:05.959 [2024-12-12 10:40:39.839784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.959 [2024-12-12 10:40:39.839819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.959 qpair failed and we were unable to recover it. 00:27:05.959 [2024-12-12 10:40:39.839956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.959 [2024-12-12 10:40:39.839989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.959 qpair failed and we were unable to recover it. 00:27:05.959 [2024-12-12 10:40:39.840194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.959 [2024-12-12 10:40:39.840228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.959 qpair failed and we were unable to recover it. 00:27:05.959 [2024-12-12 10:40:39.840371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.959 [2024-12-12 10:40:39.840404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.959 qpair failed and we were unable to recover it. 00:27:05.959 [2024-12-12 10:40:39.840651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.959 [2024-12-12 10:40:39.840685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.959 qpair failed and we were unable to recover it. 00:27:05.959 [2024-12-12 10:40:39.840895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.959 [2024-12-12 10:40:39.840930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.959 qpair failed and we were unable to recover it. 00:27:05.959 [2024-12-12 10:40:39.841072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.959 [2024-12-12 10:40:39.841106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.959 qpair failed and we were unable to recover it. 00:27:05.959 [2024-12-12 10:40:39.841237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.959 [2024-12-12 10:40:39.841271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.959 qpair failed and we were unable to recover it. 00:27:05.959 [2024-12-12 10:40:39.841462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.959 [2024-12-12 10:40:39.841495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.959 qpair failed and we were unable to recover it. 00:27:05.959 [2024-12-12 10:40:39.841677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.959 [2024-12-12 10:40:39.841713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.959 qpair failed and we were unable to recover it. 
00:27:05.959 [2024-12-12 10:40:39.841894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.959 [2024-12-12 10:40:39.841928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.959 qpair failed and we were unable to recover it. 00:27:05.959 [2024-12-12 10:40:39.842135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.959 [2024-12-12 10:40:39.842169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.959 qpair failed and we were unable to recover it. 00:27:05.959 [2024-12-12 10:40:39.842364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.959 [2024-12-12 10:40:39.842397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.959 qpair failed and we were unable to recover it. 00:27:05.959 [2024-12-12 10:40:39.842584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.959 [2024-12-12 10:40:39.842619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.959 qpair failed and we were unable to recover it. 00:27:05.959 [2024-12-12 10:40:39.842912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.959 [2024-12-12 10:40:39.842946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.959 qpair failed and we were unable to recover it. 00:27:05.959 [2024-12-12 10:40:39.843149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.959 [2024-12-12 10:40:39.843182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.959 qpair failed and we were unable to recover it. 00:27:05.959 [2024-12-12 10:40:39.843326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.959 [2024-12-12 10:40:39.843373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.959 qpair failed and we were unable to recover it. 00:27:05.959 [2024-12-12 10:40:39.843557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.959 [2024-12-12 10:40:39.843601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.959 qpair failed and we were unable to recover it. 00:27:05.959 [2024-12-12 10:40:39.843777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.959 [2024-12-12 10:40:39.843811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.959 qpair failed and we were unable to recover it. 00:27:05.959 [2024-12-12 10:40:39.843996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.959 [2024-12-12 10:40:39.844029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.959 qpair failed and we were unable to recover it. 
00:27:05.959 [2024-12-12 10:40:39.844222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.959 [2024-12-12 10:40:39.844256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.959 qpair failed and we were unable to recover it. 00:27:05.959 [2024-12-12 10:40:39.844450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.959 [2024-12-12 10:40:39.844483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.959 qpair failed and we were unable to recover it. 00:27:05.959 [2024-12-12 10:40:39.844660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.959 [2024-12-12 10:40:39.844696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.959 qpair failed and we were unable to recover it. 00:27:05.959 [2024-12-12 10:40:39.844893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.959 [2024-12-12 10:40:39.844926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.959 qpair failed and we were unable to recover it. 00:27:05.959 [2024-12-12 10:40:39.845123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.959 [2024-12-12 10:40:39.845156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.959 qpair failed and we were unable to recover it. 00:27:05.959 [2024-12-12 10:40:39.845403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.959 [2024-12-12 10:40:39.845436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.959 qpair failed and we were unable to recover it. 00:27:05.959 [2024-12-12 10:40:39.845687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.959 [2024-12-12 10:40:39.845722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.959 qpair failed and we were unable to recover it. 00:27:05.959 [2024-12-12 10:40:39.845929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.959 [2024-12-12 10:40:39.845962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.959 qpair failed and we were unable to recover it. 00:27:05.959 [2024-12-12 10:40:39.846180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.959 [2024-12-12 10:40:39.846213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.959 qpair failed and we were unable to recover it. 00:27:05.959 [2024-12-12 10:40:39.846482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.960 [2024-12-12 10:40:39.846515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.960 qpair failed and we were unable to recover it. 
00:27:05.960 [2024-12-12 10:40:39.846777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.960 [2024-12-12 10:40:39.846812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.960 qpair failed and we were unable to recover it. 00:27:05.960 [2024-12-12 10:40:39.847081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.960 [2024-12-12 10:40:39.847115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.960 qpair failed and we were unable to recover it. 00:27:05.960 [2024-12-12 10:40:39.847243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.960 [2024-12-12 10:40:39.847275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.960 qpair failed and we were unable to recover it. 00:27:05.960 [2024-12-12 10:40:39.847394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.960 [2024-12-12 10:40:39.847427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.960 qpair failed and we were unable to recover it. 00:27:05.960 [2024-12-12 10:40:39.847616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.960 [2024-12-12 10:40:39.847650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.960 qpair failed and we were unable to recover it. 00:27:05.960 [2024-12-12 10:40:39.847838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.960 [2024-12-12 10:40:39.847871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.960 qpair failed and we were unable to recover it. 00:27:05.960 [2024-12-12 10:40:39.848064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.960 [2024-12-12 10:40:39.848098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.960 qpair failed and we were unable to recover it. 00:27:05.960 [2024-12-12 10:40:39.848236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.960 [2024-12-12 10:40:39.848269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.960 qpair failed and we were unable to recover it. 00:27:05.960 [2024-12-12 10:40:39.848471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.960 [2024-12-12 10:40:39.848504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.960 qpair failed and we were unable to recover it. 00:27:05.960 [2024-12-12 10:40:39.848701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.960 [2024-12-12 10:40:39.848734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.960 qpair failed and we were unable to recover it. 
00:27:05.960 [2024-12-12 10:40:39.848912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.960 [2024-12-12 10:40:39.848946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.960 qpair failed and we were unable to recover it. 00:27:05.960 [2024-12-12 10:40:39.849139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.960 [2024-12-12 10:40:39.849173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.960 qpair failed and we were unable to recover it. 00:27:05.960 [2024-12-12 10:40:39.849349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.960 [2024-12-12 10:40:39.849382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.960 qpair failed and we were unable to recover it. 00:27:05.960 [2024-12-12 10:40:39.849522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.960 [2024-12-12 10:40:39.849556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.960 qpair failed and we were unable to recover it. 00:27:05.960 [2024-12-12 10:40:39.849757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.960 [2024-12-12 10:40:39.849790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.960 qpair failed and we were unable to recover it. 00:27:05.960 [2024-12-12 10:40:39.849915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.960 [2024-12-12 10:40:39.849948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.960 qpair failed and we were unable to recover it. 00:27:05.960 [2024-12-12 10:40:39.850123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.960 [2024-12-12 10:40:39.850156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.960 qpair failed and we were unable to recover it. 00:27:05.960 [2024-12-12 10:40:39.850360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.960 [2024-12-12 10:40:39.850394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.960 qpair failed and we were unable to recover it. 00:27:05.960 [2024-12-12 10:40:39.850596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.960 [2024-12-12 10:40:39.850632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.960 qpair failed and we were unable to recover it. 00:27:05.960 [2024-12-12 10:40:39.850763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.960 [2024-12-12 10:40:39.850797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.960 qpair failed and we were unable to recover it. 
00:27:05.960 [2024-12-12 10:40:39.851066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.960 [2024-12-12 10:40:39.851099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.960 qpair failed and we were unable to recover it. 00:27:05.960 [2024-12-12 10:40:39.851278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.960 [2024-12-12 10:40:39.851311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.960 qpair failed and we were unable to recover it. 00:27:05.960 [2024-12-12 10:40:39.851431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.960 [2024-12-12 10:40:39.851464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.960 qpair failed and we were unable to recover it. 00:27:05.960 [2024-12-12 10:40:39.851656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.960 [2024-12-12 10:40:39.851690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.960 qpair failed and we were unable to recover it. 00:27:05.960 [2024-12-12 10:40:39.851816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.960 [2024-12-12 10:40:39.851849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.960 qpair failed and we were unable to recover it. 00:27:05.960 [2024-12-12 10:40:39.852116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.960 [2024-12-12 10:40:39.852149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.960 qpair failed and we were unable to recover it. 00:27:05.960 [2024-12-12 10:40:39.852347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.960 [2024-12-12 10:40:39.852386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.960 qpair failed and we were unable to recover it. 00:27:05.960 [2024-12-12 10:40:39.852583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.960 [2024-12-12 10:40:39.852617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.960 qpair failed and we were unable to recover it. 00:27:05.960 [2024-12-12 10:40:39.852865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.960 [2024-12-12 10:40:39.852898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.960 qpair failed and we were unable to recover it. 00:27:05.960 [2024-12-12 10:40:39.853099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.960 [2024-12-12 10:40:39.853131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.960 qpair failed and we were unable to recover it. 
00:27:05.960 [2024-12-12 10:40:39.853247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.960 [2024-12-12 10:40:39.853280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.960 qpair failed and we were unable to recover it. 00:27:05.960 [2024-12-12 10:40:39.853521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.960 [2024-12-12 10:40:39.853554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.960 qpair failed and we were unable to recover it. 00:27:05.960 [2024-12-12 10:40:39.853700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.960 [2024-12-12 10:40:39.853732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.960 qpair failed and we were unable to recover it. 00:27:05.961 [2024-12-12 10:40:39.854019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-12-12 10:40:39.854052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-12-12 10:40:39.854255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-12-12 10:40:39.854288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-12-12 10:40:39.854500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-12-12 10:40:39.854533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-12-12 10:40:39.854743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-12-12 10:40:39.854778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-12-12 10:40:39.854955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-12-12 10:40:39.854988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-12-12 10:40:39.855177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-12-12 10:40:39.855209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-12-12 10:40:39.855405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-12-12 10:40:39.855438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 
00:27:05.961 [2024-12-12 10:40:39.855620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-12-12 10:40:39.855655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-12-12 10:40:39.855851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-12-12 10:40:39.855883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-12-12 10:40:39.856125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-12-12 10:40:39.856158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-12-12 10:40:39.856353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-12-12 10:40:39.856386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-12-12 10:40:39.856585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-12-12 10:40:39.856619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-12-12 10:40:39.856739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-12-12 10:40:39.856772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-12-12 10:40:39.856901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-12-12 10:40:39.856933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-12-12 10:40:39.857066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-12-12 10:40:39.857099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-12-12 10:40:39.857328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-12-12 10:40:39.857361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-12-12 10:40:39.857505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-12-12 10:40:39.857537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 
00:27:05.961 [2024-12-12 10:40:39.857674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-12-12 10:40:39.857708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-12-12 10:40:39.857895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-12-12 10:40:39.857927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-12-12 10:40:39.858053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-12-12 10:40:39.858085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-12-12 10:40:39.858265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-12-12 10:40:39.858299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-12-12 10:40:39.858474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-12-12 10:40:39.858506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-12-12 10:40:39.858629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-12-12 10:40:39.858663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-12-12 10:40:39.858841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-12-12 10:40:39.858873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-12-12 10:40:39.859164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-12-12 10:40:39.859197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-12-12 10:40:39.859388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-12-12 10:40:39.859421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-12-12 10:40:39.859613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-12-12 10:40:39.859647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 
00:27:05.961 [2024-12-12 10:40:39.859835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-12-12 10:40:39.859868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-12-12 10:40:39.860003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-12-12 10:40:39.860034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-12-12 10:40:39.860211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-12-12 10:40:39.860244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-12-12 10:40:39.860351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-12-12 10:40:39.860384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-12-12 10:40:39.860525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-12-12 10:40:39.860557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-12-12 10:40:39.860692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-12-12 10:40:39.860725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-12-12 10:40:39.860970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-12-12 10:40:39.861008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-12-12 10:40:39.861147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-12-12 10:40:39.861178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-12-12 10:40:39.861363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-12-12 10:40:39.861395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-12-12 10:40:39.861576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-12-12 10:40:39.861610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 
00:27:05.961 [2024-12-12 10:40:39.861742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-12-12 10:40:39.861774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-12-12 10:40:39.861895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-12-12 10:40:39.861927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-12-12 10:40:39.862122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.962 [2024-12-12 10:40:39.862154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.962 qpair failed and we were unable to recover it. 00:27:05.962 [2024-12-12 10:40:39.862341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.962 [2024-12-12 10:40:39.862374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.962 qpair failed and we were unable to recover it. 00:27:05.962 [2024-12-12 10:40:39.862495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.962 [2024-12-12 10:40:39.862527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.962 qpair failed and we were unable to recover it. 00:27:05.962 [2024-12-12 10:40:39.862780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.962 [2024-12-12 10:40:39.862813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.962 qpair failed and we were unable to recover it. 00:27:05.962 [2024-12-12 10:40:39.862990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.962 [2024-12-12 10:40:39.863023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.962 qpair failed and we were unable to recover it. 00:27:05.962 [2024-12-12 10:40:39.863263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.962 [2024-12-12 10:40:39.863296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.962 qpair failed and we were unable to recover it. 00:27:05.962 [2024-12-12 10:40:39.863491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.962 [2024-12-12 10:40:39.863524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.962 qpair failed and we were unable to recover it. 00:27:05.962 [2024-12-12 10:40:39.863816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.962 [2024-12-12 10:40:39.863850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.962 qpair failed and we were unable to recover it. 
00:27:05.962 [2024-12-12 10:40:39.863986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.962 [2024-12-12 10:40:39.864020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.962 qpair failed and we were unable to recover it. 00:27:05.962 [2024-12-12 10:40:39.864216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.962 [2024-12-12 10:40:39.864248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.962 qpair failed and we were unable to recover it. 00:27:05.962 [2024-12-12 10:40:39.864504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.962 [2024-12-12 10:40:39.864536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.962 qpair failed and we were unable to recover it. 00:27:05.962 [2024-12-12 10:40:39.864655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.962 [2024-12-12 10:40:39.864690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.962 qpair failed and we were unable to recover it. 00:27:05.962 [2024-12-12 10:40:39.864864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.962 [2024-12-12 10:40:39.864896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.962 qpair failed and we were unable to recover it. 00:27:05.962 [2024-12-12 10:40:39.865026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.962 [2024-12-12 10:40:39.865059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.962 qpair failed and we were unable to recover it. 00:27:05.962 [2024-12-12 10:40:39.865251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.962 [2024-12-12 10:40:39.865285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.962 qpair failed and we were unable to recover it. 00:27:05.962 [2024-12-12 10:40:39.865587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.962 [2024-12-12 10:40:39.865621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.962 qpair failed and we were unable to recover it. 00:27:05.962 [2024-12-12 10:40:39.865758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.962 [2024-12-12 10:40:39.865790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.962 qpair failed and we were unable to recover it. 00:27:05.962 [2024-12-12 10:40:39.865982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.962 [2024-12-12 10:40:39.866014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.962 qpair failed and we were unable to recover it. 
00:27:05.962 [2024-12-12 10:40:39.866233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.962 [2024-12-12 10:40:39.866266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:05.962 qpair failed and we were unable to recover it.
00:27:05.962 [... the same three-message sequence (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 10:40:39.866441 through 10:40:39.912686, alternating between tqpair=0x7fb838000b90 and tqpair=0x7fb830000b90 ...]
00:27:05.967 [2024-12-12 10:40:39.912880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.967 [2024-12-12 10:40:39.912913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.967 qpair failed and we were unable to recover it. 00:27:05.967 [2024-12-12 10:40:39.913047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.968 [2024-12-12 10:40:39.913080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.968 qpair failed and we were unable to recover it. 00:27:05.968 [2024-12-12 10:40:39.913186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.968 [2024-12-12 10:40:39.913218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.968 qpair failed and we were unable to recover it. 00:27:05.968 [2024-12-12 10:40:39.913453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.968 [2024-12-12 10:40:39.913486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.968 qpair failed and we were unable to recover it. 00:27:05.968 [2024-12-12 10:40:39.913599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.968 [2024-12-12 10:40:39.913633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.968 qpair failed and we were unable to recover it. 00:27:05.968 [2024-12-12 10:40:39.913819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.968 [2024-12-12 10:40:39.913851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.968 qpair failed and we were unable to recover it. 00:27:05.968 [2024-12-12 10:40:39.914124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.968 [2024-12-12 10:40:39.914157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.968 qpair failed and we were unable to recover it. 00:27:05.968 [2024-12-12 10:40:39.914455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.968 [2024-12-12 10:40:39.914488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.968 qpair failed and we were unable to recover it. 00:27:05.968 [2024-12-12 10:40:39.914725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.968 [2024-12-12 10:40:39.914759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.968 qpair failed and we were unable to recover it. 00:27:05.968 [2024-12-12 10:40:39.914878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.968 [2024-12-12 10:40:39.914909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.968 qpair failed and we were unable to recover it. 
00:27:05.968 [2024-12-12 10:40:39.915102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.968 [2024-12-12 10:40:39.915136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.968 qpair failed and we were unable to recover it. 00:27:05.968 [2024-12-12 10:40:39.915419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.968 [2024-12-12 10:40:39.915451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.968 qpair failed and we were unable to recover it. 00:27:05.968 [2024-12-12 10:40:39.915585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.968 [2024-12-12 10:40:39.915620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.968 qpair failed and we were unable to recover it. 00:27:05.968 [2024-12-12 10:40:39.915739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.968 [2024-12-12 10:40:39.915772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.968 qpair failed and we were unable to recover it. 00:27:05.968 [2024-12-12 10:40:39.915880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.968 [2024-12-12 10:40:39.915912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.968 qpair failed and we were unable to recover it. 00:27:05.968 [2024-12-12 10:40:39.916094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.968 [2024-12-12 10:40:39.916126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.968 qpair failed and we were unable to recover it. 00:27:05.968 [2024-12-12 10:40:39.916305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.968 [2024-12-12 10:40:39.916338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.968 qpair failed and we were unable to recover it. 00:27:05.968 [2024-12-12 10:40:39.916583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.968 [2024-12-12 10:40:39.916617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.968 qpair failed and we were unable to recover it. 00:27:05.968 [2024-12-12 10:40:39.916737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.968 [2024-12-12 10:40:39.916770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.968 qpair failed and we were unable to recover it. 00:27:05.968 [2024-12-12 10:40:39.916939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.968 [2024-12-12 10:40:39.916973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.968 qpair failed and we were unable to recover it. 
00:27:05.968 [2024-12-12 10:40:39.917179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.968 [2024-12-12 10:40:39.917212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.968 qpair failed and we were unable to recover it. 00:27:05.968 [2024-12-12 10:40:39.917320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.968 [2024-12-12 10:40:39.917353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.968 qpair failed and we were unable to recover it. 00:27:05.968 [2024-12-12 10:40:39.917616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.968 [2024-12-12 10:40:39.917650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.968 qpair failed and we were unable to recover it. 00:27:05.968 [2024-12-12 10:40:39.917885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.968 [2024-12-12 10:40:39.917918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.968 qpair failed and we were unable to recover it. 00:27:05.968 [2024-12-12 10:40:39.918035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.968 [2024-12-12 10:40:39.918068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.968 qpair failed and we were unable to recover it. 00:27:05.968 [2024-12-12 10:40:39.918250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.968 [2024-12-12 10:40:39.918282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.968 qpair failed and we were unable to recover it. 00:27:05.968 [2024-12-12 10:40:39.918454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.968 [2024-12-12 10:40:39.918487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.968 qpair failed and we were unable to recover it. 00:27:05.968 [2024-12-12 10:40:39.918727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.968 [2024-12-12 10:40:39.918761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.968 qpair failed and we were unable to recover it. 00:27:05.968 [2024-12-12 10:40:39.918952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.968 [2024-12-12 10:40:39.918984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.968 qpair failed and we were unable to recover it. 00:27:05.968 [2024-12-12 10:40:39.919163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.968 [2024-12-12 10:40:39.919196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.968 qpair failed and we were unable to recover it. 
00:27:05.968 [2024-12-12 10:40:39.919328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.968 [2024-12-12 10:40:39.919361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.968 qpair failed and we were unable to recover it. 00:27:05.968 [2024-12-12 10:40:39.919490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.968 [2024-12-12 10:40:39.919522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.968 qpair failed and we were unable to recover it. 00:27:05.968 [2024-12-12 10:40:39.919662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.968 [2024-12-12 10:40:39.919697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.968 qpair failed and we were unable to recover it. 00:27:05.968 [2024-12-12 10:40:39.919879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.968 [2024-12-12 10:40:39.919917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.968 qpair failed and we were unable to recover it. 00:27:05.968 [2024-12-12 10:40:39.920110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.968 [2024-12-12 10:40:39.920141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.968 qpair failed and we were unable to recover it. 00:27:05.968 [2024-12-12 10:40:39.920254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.968 [2024-12-12 10:40:39.920287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.968 qpair failed and we were unable to recover it. 00:27:05.968 [2024-12-12 10:40:39.920392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.968 [2024-12-12 10:40:39.920424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.968 qpair failed and we were unable to recover it. 00:27:05.968 [2024-12-12 10:40:39.920558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.968 [2024-12-12 10:40:39.920601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.968 qpair failed and we were unable to recover it. 00:27:05.968 [2024-12-12 10:40:39.920801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.968 [2024-12-12 10:40:39.920834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.968 qpair failed and we were unable to recover it. 00:27:05.968 [2024-12-12 10:40:39.921015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.968 [2024-12-12 10:40:39.921046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.968 qpair failed and we were unable to recover it. 
00:27:05.968 [2024-12-12 10:40:39.921163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-12-12 10:40:39.921196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-12-12 10:40:39.921303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-12-12 10:40:39.921335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-12-12 10:40:39.921508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-12-12 10:40:39.921541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-12-12 10:40:39.921667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-12-12 10:40:39.921700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-12-12 10:40:39.921813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-12-12 10:40:39.921845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-12-12 10:40:39.921966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-12-12 10:40:39.921998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-12-12 10:40:39.922247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-12-12 10:40:39.922279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-12-12 10:40:39.922389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-12-12 10:40:39.922422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-12-12 10:40:39.922603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-12-12 10:40:39.922637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-12-12 10:40:39.922814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-12-12 10:40:39.922846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 
00:27:05.969 [2024-12-12 10:40:39.922967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-12-12 10:40:39.923000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-12-12 10:40:39.923238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-12-12 10:40:39.923269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-12-12 10:40:39.923453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-12-12 10:40:39.923487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-12-12 10:40:39.923612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-12-12 10:40:39.923647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-12-12 10:40:39.923837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-12-12 10:40:39.923870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-12-12 10:40:39.924157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-12-12 10:40:39.924189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-12-12 10:40:39.924308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-12-12 10:40:39.924341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-12-12 10:40:39.924453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-12-12 10:40:39.924485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-12-12 10:40:39.924611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-12-12 10:40:39.924645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-12-12 10:40:39.924829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-12-12 10:40:39.924861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 
00:27:05.969 [2024-12-12 10:40:39.925051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-12-12 10:40:39.925084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-12-12 10:40:39.925203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-12-12 10:40:39.925235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-12-12 10:40:39.925363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-12-12 10:40:39.925397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-12-12 10:40:39.925506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-12-12 10:40:39.925539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-12-12 10:40:39.925681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-12-12 10:40:39.925715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-12-12 10:40:39.925833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-12-12 10:40:39.925866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-12-12 10:40:39.926045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-12-12 10:40:39.926078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-12-12 10:40:39.926256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-12-12 10:40:39.926289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-12-12 10:40:39.926408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-12-12 10:40:39.926441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-12-12 10:40:39.926629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-12-12 10:40:39.926663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 
00:27:05.969 [2024-12-12 10:40:39.926770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-12-12 10:40:39.926802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-12-12 10:40:39.927042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-12-12 10:40:39.927075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-12-12 10:40:39.927315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-12-12 10:40:39.927347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-12-12 10:40:39.927521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-12-12 10:40:39.927597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-12-12 10:40:39.927840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-12-12 10:40:39.927874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-12-12 10:40:39.927993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-12-12 10:40:39.928026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-12-12 10:40:39.928223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-12-12 10:40:39.928255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-12-12 10:40:39.928427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-12-12 10:40:39.928460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-12-12 10:40:39.928638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-12-12 10:40:39.928673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-12-12 10:40:39.928936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-12-12 10:40:39.928968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 
00:27:05.970 [2024-12-12 10:40:39.929098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-12-12 10:40:39.929131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-12-12 10:40:39.929318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-12-12 10:40:39.929351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-12-12 10:40:39.929535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-12-12 10:40:39.929568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-12-12 10:40:39.929692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-12-12 10:40:39.929725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-12-12 10:40:39.929910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-12-12 10:40:39.929943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-12-12 10:40:39.930065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-12-12 10:40:39.930097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-12-12 10:40:39.930216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-12-12 10:40:39.930248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-12-12 10:40:39.930444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-12-12 10:40:39.930476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-12-12 10:40:39.930606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-12-12 10:40:39.930641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-12-12 10:40:39.930816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-12-12 10:40:39.930849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 
00:27:05.970 [2024-12-12 10:40:39.931042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-12-12 10:40:39.931074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-12-12 10:40:39.931335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-12-12 10:40:39.931368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-12-12 10:40:39.931491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-12-12 10:40:39.931524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-12-12 10:40:39.931654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-12-12 10:40:39.931686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-12-12 10:40:39.931928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-12-12 10:40:39.931961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-12-12 10:40:39.932202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-12-12 10:40:39.932236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-12-12 10:40:39.932420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-12-12 10:40:39.932452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-12-12 10:40:39.932699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-12-12 10:40:39.932733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-12-12 10:40:39.932907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-12-12 10:40:39.932939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-12-12 10:40:39.933198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-12-12 10:40:39.933230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 
00:27:05.970 [2024-12-12 10:40:39.933430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-12-12 10:40:39.933463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-12-12 10:40:39.933651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-12-12 10:40:39.933686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-12-12 10:40:39.933809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-12-12 10:40:39.933841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-12-12 10:40:39.934010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-12-12 10:40:39.934042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-12-12 10:40:39.934230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-12-12 10:40:39.934263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-12-12 10:40:39.934436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-12-12 10:40:39.934468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-12-12 10:40:39.934730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-12-12 10:40:39.934764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-12-12 10:40:39.934948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-12-12 10:40:39.934981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-12-12 10:40:39.935173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-12-12 10:40:39.935205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-12-12 10:40:39.935328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-12-12 10:40:39.935361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 
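errno 111 in these entries is ECONNREFUSED on Linux: the target address 10.0.0.2 is reachable, but nothing is accepting TCP connections on port 4420 (the standard NVMe/TCP port), so every connect() issued by the posix socket layer is rejected and each qpair connect attempt fails. Below is a minimal standalone sketch (plain POSIX sockets, not SPDK source; the address and port are taken from the log above) that reproduces the same errno when no listener is present on the port:

/* sketch.c -- reproduce "connect() failed, errno = 111" (ECONNREFUSED).
 * Assumes 10.0.0.2 is reachable but has no listener on port 4420, as in
 * the log above; otherwise connect() may fail with a different errno
 * (e.g. ETIMEDOUT or EHOSTUNREACH). */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener on the port this prints errno = 111
         * (ECONNREFUSED), matching the posix_sock_create error above. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}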
[... the same failure pair against tqpair=0x7fb830000b90 continues through 10:40:39.936830, after which the failures continue against a different qpair address; duplicate entries elided ...]
00:27:05.971 [2024-12-12 10:40:39.937057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.971 [2024-12-12 10:40:39.937130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:05.971 qpair failed and we were unable to recover it.
[... the same failure pair then repeats against tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420; duplicate entries elided ...]
00:27:06.252 [2024-12-12 10:40:39.948462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.252 [2024-12-12 10:40:39.948494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.252 qpair failed and we were unable to recover it.
00:27:06.252 [2024-12-12 10:40:39.948671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.252 [2024-12-12 10:40:39.948705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.252 qpair failed and we were unable to recover it. 00:27:06.252 [2024-12-12 10:40:39.948971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.252 [2024-12-12 10:40:39.949003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.252 qpair failed and we were unable to recover it. 00:27:06.252 [2024-12-12 10:40:39.949242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.252 [2024-12-12 10:40:39.949275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.252 qpair failed and we were unable to recover it. 00:27:06.252 [2024-12-12 10:40:39.949535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.252 [2024-12-12 10:40:39.949567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.252 qpair failed and we were unable to recover it. 00:27:06.252 [2024-12-12 10:40:39.949693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.252 [2024-12-12 10:40:39.949726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.252 qpair failed and we were unable to recover it. 00:27:06.252 [2024-12-12 10:40:39.949994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.252 [2024-12-12 10:40:39.950026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.252 qpair failed and we were unable to recover it. 00:27:06.252 [2024-12-12 10:40:39.950210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.252 [2024-12-12 10:40:39.950243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.252 qpair failed and we were unable to recover it. 00:27:06.252 [2024-12-12 10:40:39.950488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.252 [2024-12-12 10:40:39.950521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.252 qpair failed and we were unable to recover it. 00:27:06.252 [2024-12-12 10:40:39.950644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.252 [2024-12-12 10:40:39.950679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.252 qpair failed and we were unable to recover it. 00:27:06.252 [2024-12-12 10:40:39.950797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.252 [2024-12-12 10:40:39.950829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.252 qpair failed and we were unable to recover it. 
00:27:06.252 [2024-12-12 10:40:39.951077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.252 [2024-12-12 10:40:39.951110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.252 qpair failed and we were unable to recover it. 00:27:06.252 [2024-12-12 10:40:39.951287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.252 [2024-12-12 10:40:39.951320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.252 qpair failed and we were unable to recover it. 00:27:06.253 [2024-12-12 10:40:39.951501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-12-12 10:40:39.951534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 00:27:06.253 [2024-12-12 10:40:39.951717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-12-12 10:40:39.951751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 00:27:06.253 [2024-12-12 10:40:39.951925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-12-12 10:40:39.951957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 00:27:06.253 [2024-12-12 10:40:39.952072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-12-12 10:40:39.952105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 00:27:06.253 [2024-12-12 10:40:39.952233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-12-12 10:40:39.952265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 00:27:06.253 [2024-12-12 10:40:39.952447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-12-12 10:40:39.952480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 00:27:06.253 [2024-12-12 10:40:39.952591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-12-12 10:40:39.952627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 00:27:06.253 [2024-12-12 10:40:39.952736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-12-12 10:40:39.952769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 
00:27:06.253 [2024-12-12 10:40:39.953073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-12-12 10:40:39.953107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 00:27:06.253 [2024-12-12 10:40:39.953289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-12-12 10:40:39.953321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 00:27:06.253 [2024-12-12 10:40:39.953589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-12-12 10:40:39.953622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 00:27:06.253 [2024-12-12 10:40:39.953884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-12-12 10:40:39.953922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 00:27:06.253 [2024-12-12 10:40:39.954104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-12-12 10:40:39.954137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 00:27:06.253 [2024-12-12 10:40:39.954324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-12-12 10:40:39.954356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 00:27:06.253 [2024-12-12 10:40:39.954567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-12-12 10:40:39.954624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 00:27:06.253 [2024-12-12 10:40:39.954815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-12-12 10:40:39.954848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 00:27:06.253 [2024-12-12 10:40:39.954987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-12-12 10:40:39.955020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 00:27:06.253 [2024-12-12 10:40:39.955140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-12-12 10:40:39.955173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 
00:27:06.253 [2024-12-12 10:40:39.955367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-12-12 10:40:39.955399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 00:27:06.253 [2024-12-12 10:40:39.955590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-12-12 10:40:39.955624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 00:27:06.253 [2024-12-12 10:40:39.955822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-12-12 10:40:39.955855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 00:27:06.253 [2024-12-12 10:40:39.955963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-12-12 10:40:39.955997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 00:27:06.253 [2024-12-12 10:40:39.956208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-12-12 10:40:39.956240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 00:27:06.253 [2024-12-12 10:40:39.956429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-12-12 10:40:39.956462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 00:27:06.253 [2024-12-12 10:40:39.956669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-12-12 10:40:39.956703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 00:27:06.253 [2024-12-12 10:40:39.956893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-12-12 10:40:39.956926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 00:27:06.253 [2024-12-12 10:40:39.957212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-12-12 10:40:39.957245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 00:27:06.253 [2024-12-12 10:40:39.957420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-12-12 10:40:39.957452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 
00:27:06.253 [2024-12-12 10:40:39.957588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-12-12 10:40:39.957622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 00:27:06.253 [2024-12-12 10:40:39.957857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-12-12 10:40:39.957889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 00:27:06.253 [2024-12-12 10:40:39.958140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-12-12 10:40:39.958173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 00:27:06.253 [2024-12-12 10:40:39.958278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-12-12 10:40:39.958310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 00:27:06.253 [2024-12-12 10:40:39.958492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-12-12 10:40:39.958524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 00:27:06.253 [2024-12-12 10:40:39.958708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-12-12 10:40:39.958742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 00:27:06.254 [2024-12-12 10:40:39.958978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-12-12 10:40:39.959010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-12-12 10:40:39.959189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-12-12 10:40:39.959221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-12-12 10:40:39.959394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-12-12 10:40:39.959427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-12-12 10:40:39.959614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-12-12 10:40:39.959648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 
00:27:06.254 [2024-12-12 10:40:39.959856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-12-12 10:40:39.959888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-12-12 10:40:39.960080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-12-12 10:40:39.960113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-12-12 10:40:39.960232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-12-12 10:40:39.960264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-12-12 10:40:39.960462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-12-12 10:40:39.960493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-12-12 10:40:39.960709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-12-12 10:40:39.960745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-12-12 10:40:39.960983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-12-12 10:40:39.961015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-12-12 10:40:39.961269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-12-12 10:40:39.961302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-12-12 10:40:39.961496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-12-12 10:40:39.961528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-12-12 10:40:39.961725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-12-12 10:40:39.961759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-12-12 10:40:39.961929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-12-12 10:40:39.961960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 
00:27:06.254 [2024-12-12 10:40:39.962228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-12-12 10:40:39.962260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-12-12 10:40:39.962370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-12-12 10:40:39.962402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-12-12 10:40:39.962641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-12-12 10:40:39.962675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-12-12 10:40:39.962858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-12-12 10:40:39.962895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-12-12 10:40:39.963136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-12-12 10:40:39.963168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-12-12 10:40:39.963277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-12-12 10:40:39.963309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-12-12 10:40:39.963504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-12-12 10:40:39.963536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-12-12 10:40:39.963673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-12-12 10:40:39.963707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-12-12 10:40:39.963879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-12-12 10:40:39.963912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-12-12 10:40:39.964149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-12-12 10:40:39.964181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 
00:27:06.254 [2024-12-12 10:40:39.964352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-12-12 10:40:39.964384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-12-12 10:40:39.964579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-12-12 10:40:39.964612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-12-12 10:40:39.964729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-12-12 10:40:39.964761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-12-12 10:40:39.964967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-12-12 10:40:39.964999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-12-12 10:40:39.965242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-12-12 10:40:39.965274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-12-12 10:40:39.965398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-12-12 10:40:39.965431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-12-12 10:40:39.965703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-12-12 10:40:39.965758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-12-12 10:40:39.965956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-12-12 10:40:39.965989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-12-12 10:40:39.966163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-12-12 10:40:39.966195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-12-12 10:40:39.966304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-12-12 10:40:39.966336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 
00:27:06.254 [2024-12-12 10:40:39.966508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-12-12 10:40:39.966540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-12-12 10:40:39.966720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-12-12 10:40:39.966752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-12-12 10:40:39.966941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-12-12 10:40:39.966973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-12-12 10:40:39.967167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-12-12 10:40:39.967200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-12-12 10:40:39.967328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-12-12 10:40:39.967360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-12-12 10:40:39.967534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-12-12 10:40:39.967566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-12-12 10:40:39.967781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-12-12 10:40:39.967813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-12-12 10:40:39.968001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-12-12 10:40:39.968033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-12-12 10:40:39.968224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-12-12 10:40:39.968256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-12-12 10:40:39.968442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-12-12 10:40:39.968475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 
00:27:06.255 [2024-12-12 10:40:39.968762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-12-12 10:40:39.968796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-12-12 10:40:39.968985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-12-12 10:40:39.969017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-12-12 10:40:39.969282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-12-12 10:40:39.969315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-12-12 10:40:39.969518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-12-12 10:40:39.969550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-12-12 10:40:39.969831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-12-12 10:40:39.969864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-12-12 10:40:39.970105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-12-12 10:40:39.970136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-12-12 10:40:39.970254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-12-12 10:40:39.970287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-12-12 10:40:39.970546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-12-12 10:40:39.970606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-12-12 10:40:39.970824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-12-12 10:40:39.970856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-12-12 10:40:39.971038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-12-12 10:40:39.971070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 
00:27:06.255 [2024-12-12 10:40:39.971195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-12-12 10:40:39.971228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-12-12 10:40:39.971336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-12-12 10:40:39.971365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-12-12 10:40:39.971495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-12-12 10:40:39.971528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-12-12 10:40:39.971791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-12-12 10:40:39.971831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-12-12 10:40:39.972045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-12-12 10:40:39.972078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-12-12 10:40:39.972268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-12-12 10:40:39.972300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-12-12 10:40:39.972565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-12-12 10:40:39.972609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-12-12 10:40:39.972810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-12-12 10:40:39.972843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-12-12 10:40:39.973034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-12-12 10:40:39.973066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-12-12 10:40:39.973205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-12-12 10:40:39.973238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 
00:27:06.255 [2024-12-12 10:40:39.973362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-12-12 10:40:39.973394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-12-12 10:40:39.973630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-12-12 10:40:39.973665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-12-12 10:40:39.973861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-12-12 10:40:39.973894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-12-12 10:40:39.974067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-12-12 10:40:39.974100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-12-12 10:40:39.974272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.256 [2024-12-12 10:40:39.974304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.256 qpair failed and we were unable to recover it. 00:27:06.256 [2024-12-12 10:40:39.974612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.256 [2024-12-12 10:40:39.974646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.256 qpair failed and we were unable to recover it. 00:27:06.256 [2024-12-12 10:40:39.974772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.256 [2024-12-12 10:40:39.974804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.256 qpair failed and we were unable to recover it. 00:27:06.256 [2024-12-12 10:40:39.975070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.256 [2024-12-12 10:40:39.975103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.256 qpair failed and we were unable to recover it. 00:27:06.256 [2024-12-12 10:40:39.975216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.256 [2024-12-12 10:40:39.975249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.256 qpair failed and we were unable to recover it. 00:27:06.256 [2024-12-12 10:40:39.975423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.256 [2024-12-12 10:40:39.975456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.256 qpair failed and we were unable to recover it. 
00:27:06.256 [2024-12-12 10:40:39.975601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.256 [2024-12-12 10:40:39.975634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.256 qpair failed and we were unable to recover it. 00:27:06.256 [2024-12-12 10:40:39.975746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.256 [2024-12-12 10:40:39.975779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.256 qpair failed and we were unable to recover it. 00:27:06.256 [2024-12-12 10:40:39.975881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.256 [2024-12-12 10:40:39.975913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.256 qpair failed and we were unable to recover it. 00:27:06.256 [2024-12-12 10:40:39.976101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.256 [2024-12-12 10:40:39.976133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.256 qpair failed and we were unable to recover it. 00:27:06.256 [2024-12-12 10:40:39.976317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.256 [2024-12-12 10:40:39.976350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.256 qpair failed and we were unable to recover it. 00:27:06.256 [2024-12-12 10:40:39.976522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.256 [2024-12-12 10:40:39.976555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.256 qpair failed and we were unable to recover it. 00:27:06.256 [2024-12-12 10:40:39.976699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.256 [2024-12-12 10:40:39.976732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.256 qpair failed and we were unable to recover it. 00:27:06.256 [2024-12-12 10:40:39.976982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.256 [2024-12-12 10:40:39.977014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.256 qpair failed and we were unable to recover it. 00:27:06.256 [2024-12-12 10:40:39.977190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.256 [2024-12-12 10:40:39.977223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.256 qpair failed and we were unable to recover it. 00:27:06.256 [2024-12-12 10:40:39.977348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.256 [2024-12-12 10:40:39.977380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.256 qpair failed and we were unable to recover it. 
00:27:06.256 [2024-12-12 10:40:39.977515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.256 [2024-12-12 10:40:39.977548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.256 qpair failed and we were unable to recover it.
[... the identical connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it." triplet repeats continuously for tqpair=0x7fb838000b90, addr=10.0.0.2, port=4420 from 10:40:39.977 through 10:40:40.009 ...]
00:27:06.260 [2024-12-12 10:40:40.009948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.260 [2024-12-12 10:40:40.010110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.260 qpair failed and we were unable to recover it.
00:27:06.260 [2024-12-12 10:40:40.010427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.260 [2024-12-12 10:40:40.010586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.260 qpair failed and we were unable to recover it.
[... the same failure then repeats for tqpair=0x7fb830000b90 through 10:40:40.015 and for tqpair=0x1c1b1a0 through 10:40:40.024 (with occasional interleaved 0x7fb838000b90 entries); every attempt targets addr=10.0.0.2, port=4420, fails with errno = 111, and ends with "qpair failed and we were unable to recover it." ...]
00:27:06.262 [2024-12-12 10:40:40.024106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.262 [2024-12-12 10:40:40.024138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.262 qpair failed and we were unable to recover it. 00:27:06.262 [2024-12-12 10:40:40.024332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.262 [2024-12-12 10:40:40.024365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.262 qpair failed and we were unable to recover it. 00:27:06.262 [2024-12-12 10:40:40.024627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.262 [2024-12-12 10:40:40.024661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.262 qpair failed and we were unable to recover it. 00:27:06.262 [2024-12-12 10:40:40.024786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.262 [2024-12-12 10:40:40.024818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.262 qpair failed and we were unable to recover it. 00:27:06.262 [2024-12-12 10:40:40.025004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.262 [2024-12-12 10:40:40.025048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.262 qpair failed and we were unable to recover it. 00:27:06.262 [2024-12-12 10:40:40.025154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.263 [2024-12-12 10:40:40.025187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.263 qpair failed and we were unable to recover it. 00:27:06.263 [2024-12-12 10:40:40.025322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.263 [2024-12-12 10:40:40.025354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.263 qpair failed and we were unable to recover it. 00:27:06.263 [2024-12-12 10:40:40.025618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.263 [2024-12-12 10:40:40.025652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.263 qpair failed and we were unable to recover it. 00:27:06.263 [2024-12-12 10:40:40.025914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.263 [2024-12-12 10:40:40.025947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.263 qpair failed and we were unable to recover it. 00:27:06.263 [2024-12-12 10:40:40.026073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.263 [2024-12-12 10:40:40.026108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.263 qpair failed and we were unable to recover it. 
00:27:06.263 [2024-12-12 10:40:40.026292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.263 [2024-12-12 10:40:40.026325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.263 qpair failed and we were unable to recover it. 00:27:06.263 [2024-12-12 10:40:40.026435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.263 [2024-12-12 10:40:40.026468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.263 qpair failed and we were unable to recover it. 00:27:06.263 [2024-12-12 10:40:40.026736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.263 [2024-12-12 10:40:40.026771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.263 qpair failed and we were unable to recover it. 00:27:06.263 [2024-12-12 10:40:40.026889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.263 [2024-12-12 10:40:40.026922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.263 qpair failed and we were unable to recover it. 00:27:06.263 [2024-12-12 10:40:40.027128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.263 [2024-12-12 10:40:40.027160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.263 qpair failed and we were unable to recover it. 00:27:06.263 [2024-12-12 10:40:40.027399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.263 [2024-12-12 10:40:40.027432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.263 qpair failed and we were unable to recover it. 00:27:06.263 [2024-12-12 10:40:40.027617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.263 [2024-12-12 10:40:40.027652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.263 qpair failed and we were unable to recover it. 00:27:06.263 [2024-12-12 10:40:40.027772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.263 [2024-12-12 10:40:40.027805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.263 qpair failed and we were unable to recover it. 00:27:06.263 [2024-12-12 10:40:40.028076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.263 [2024-12-12 10:40:40.028110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.263 qpair failed and we were unable to recover it. 00:27:06.263 [2024-12-12 10:40:40.028248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.263 [2024-12-12 10:40:40.028280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.263 qpair failed and we were unable to recover it. 
00:27:06.263 [2024-12-12 10:40:40.028459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.263 [2024-12-12 10:40:40.028492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.263 qpair failed and we were unable to recover it. 00:27:06.263 [2024-12-12 10:40:40.028622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.263 [2024-12-12 10:40:40.028656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.263 qpair failed and we were unable to recover it. 00:27:06.263 [2024-12-12 10:40:40.028843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.263 [2024-12-12 10:40:40.028877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.263 qpair failed and we were unable to recover it. 00:27:06.263 [2024-12-12 10:40:40.029117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.263 [2024-12-12 10:40:40.029149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.263 qpair failed and we were unable to recover it. 00:27:06.263 [2024-12-12 10:40:40.029342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.263 [2024-12-12 10:40:40.029374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.263 qpair failed and we were unable to recover it. 00:27:06.263 [2024-12-12 10:40:40.029551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.263 [2024-12-12 10:40:40.029592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.263 qpair failed and we were unable to recover it. 00:27:06.263 [2024-12-12 10:40:40.029762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.263 [2024-12-12 10:40:40.029795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.263 qpair failed and we were unable to recover it. 00:27:06.263 [2024-12-12 10:40:40.029982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.263 [2024-12-12 10:40:40.030015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.263 qpair failed and we were unable to recover it. 00:27:06.263 [2024-12-12 10:40:40.030301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.263 [2024-12-12 10:40:40.030335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.263 qpair failed and we were unable to recover it. 00:27:06.263 [2024-12-12 10:40:40.030465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.263 [2024-12-12 10:40:40.030508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.263 qpair failed and we were unable to recover it. 
00:27:06.263 [2024-12-12 10:40:40.030752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.263 [2024-12-12 10:40:40.030801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.263 qpair failed and we were unable to recover it. 00:27:06.263 [2024-12-12 10:40:40.031027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.263 [2024-12-12 10:40:40.031077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.263 qpair failed and we were unable to recover it. 00:27:06.263 [2024-12-12 10:40:40.031396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.263 [2024-12-12 10:40:40.031444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.263 qpair failed and we were unable to recover it. 00:27:06.263 [2024-12-12 10:40:40.031671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.263 [2024-12-12 10:40:40.031729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.263 qpair failed and we were unable to recover it. 00:27:06.263 [2024-12-12 10:40:40.031970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.263 [2024-12-12 10:40:40.032037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.263 qpair failed and we were unable to recover it. 00:27:06.263 [2024-12-12 10:40:40.032298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.263 [2024-12-12 10:40:40.032337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.263 qpair failed and we were unable to recover it. 00:27:06.263 [2024-12-12 10:40:40.032680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.263 [2024-12-12 10:40:40.032740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.263 qpair failed and we were unable to recover it. 00:27:06.263 [2024-12-12 10:40:40.032943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.263 [2024-12-12 10:40:40.033026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.263 qpair failed and we were unable to recover it. 00:27:06.263 [2024-12-12 10:40:40.033332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.263 [2024-12-12 10:40:40.033404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.263 qpair failed and we were unable to recover it. 00:27:06.263 [2024-12-12 10:40:40.033660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.263 [2024-12-12 10:40:40.033700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.263 qpair failed and we were unable to recover it. 
00:27:06.263 [2024-12-12 10:40:40.033844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.263 [2024-12-12 10:40:40.033877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.263 qpair failed and we were unable to recover it. 00:27:06.263 [2024-12-12 10:40:40.034127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.263 [2024-12-12 10:40:40.034160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.263 qpair failed and we were unable to recover it. 00:27:06.263 [2024-12-12 10:40:40.034273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.264 [2024-12-12 10:40:40.034307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.264 qpair failed and we were unable to recover it. 00:27:06.264 [2024-12-12 10:40:40.034436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.264 [2024-12-12 10:40:40.034469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.264 qpair failed and we were unable to recover it. 00:27:06.264 [2024-12-12 10:40:40.034606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.264 [2024-12-12 10:40:40.034640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.264 qpair failed and we were unable to recover it. 00:27:06.264 [2024-12-12 10:40:40.034827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.264 [2024-12-12 10:40:40.034862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.264 qpair failed and we were unable to recover it. 00:27:06.264 [2024-12-12 10:40:40.035129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.264 [2024-12-12 10:40:40.035162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.264 qpair failed and we were unable to recover it. 00:27:06.264 [2024-12-12 10:40:40.035289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.264 [2024-12-12 10:40:40.035322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.264 qpair failed and we were unable to recover it. 00:27:06.264 [2024-12-12 10:40:40.035505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.264 [2024-12-12 10:40:40.035538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.264 qpair failed and we were unable to recover it. 00:27:06.264 [2024-12-12 10:40:40.035722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.264 [2024-12-12 10:40:40.035755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.264 qpair failed and we were unable to recover it. 
00:27:06.264 [2024-12-12 10:40:40.035863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.264 [2024-12-12 10:40:40.035895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.264 qpair failed and we were unable to recover it. 00:27:06.264 [2024-12-12 10:40:40.036124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.264 [2024-12-12 10:40:40.036158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.264 qpair failed and we were unable to recover it. 00:27:06.264 [2024-12-12 10:40:40.036407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.264 [2024-12-12 10:40:40.036440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.264 qpair failed and we were unable to recover it. 00:27:06.264 [2024-12-12 10:40:40.036634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.264 [2024-12-12 10:40:40.036668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.264 qpair failed and we were unable to recover it. 00:27:06.264 [2024-12-12 10:40:40.036861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.264 [2024-12-12 10:40:40.036894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.264 qpair failed and we were unable to recover it. 00:27:06.264 [2024-12-12 10:40:40.037027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.264 [2024-12-12 10:40:40.037061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.264 qpair failed and we were unable to recover it. 00:27:06.264 [2024-12-12 10:40:40.037180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.264 [2024-12-12 10:40:40.037213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.264 qpair failed and we were unable to recover it. 00:27:06.264 [2024-12-12 10:40:40.037392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.264 [2024-12-12 10:40:40.037425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.264 qpair failed and we were unable to recover it. 00:27:06.264 [2024-12-12 10:40:40.037663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.264 [2024-12-12 10:40:40.037704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.264 qpair failed and we were unable to recover it. 00:27:06.264 [2024-12-12 10:40:40.037880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.264 [2024-12-12 10:40:40.037914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.264 qpair failed and we were unable to recover it. 
00:27:06.264 [2024-12-12 10:40:40.038089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.264 [2024-12-12 10:40:40.038122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.264 qpair failed and we were unable to recover it. 00:27:06.264 [2024-12-12 10:40:40.038262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.264 [2024-12-12 10:40:40.038295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.264 qpair failed and we were unable to recover it. 00:27:06.264 [2024-12-12 10:40:40.038473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.264 [2024-12-12 10:40:40.038507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.264 qpair failed and we were unable to recover it. 00:27:06.264 [2024-12-12 10:40:40.038701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.264 [2024-12-12 10:40:40.038741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.264 qpair failed and we were unable to recover it. 00:27:06.264 [2024-12-12 10:40:40.038931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.264 [2024-12-12 10:40:40.038965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.264 qpair failed and we were unable to recover it. 00:27:06.264 [2024-12-12 10:40:40.039157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.264 [2024-12-12 10:40:40.039190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.264 qpair failed and we were unable to recover it. 00:27:06.264 [2024-12-12 10:40:40.039377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.264 [2024-12-12 10:40:40.039411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.264 qpair failed and we were unable to recover it. 00:27:06.264 [2024-12-12 10:40:40.039648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.264 [2024-12-12 10:40:40.039683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.264 qpair failed and we were unable to recover it. 00:27:06.264 [2024-12-12 10:40:40.039801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.264 [2024-12-12 10:40:40.039838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.264 qpair failed and we were unable to recover it. 00:27:06.264 [2024-12-12 10:40:40.040054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.264 [2024-12-12 10:40:40.040087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.264 qpair failed and we were unable to recover it. 
00:27:06.264 [2024-12-12 10:40:40.040264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.264 [2024-12-12 10:40:40.040297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.264 qpair failed and we were unable to recover it. 00:27:06.264 [2024-12-12 10:40:40.040530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.264 [2024-12-12 10:40:40.040563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.264 qpair failed and we were unable to recover it. 00:27:06.264 [2024-12-12 10:40:40.040770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.264 [2024-12-12 10:40:40.040803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.264 qpair failed and we were unable to recover it. 00:27:06.264 [2024-12-12 10:40:40.040931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.264 [2024-12-12 10:40:40.040964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.264 qpair failed and we were unable to recover it. 00:27:06.264 [2024-12-12 10:40:40.041164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.264 [2024-12-12 10:40:40.041197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.264 qpair failed and we were unable to recover it. 00:27:06.264 [2024-12-12 10:40:40.041390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.264 [2024-12-12 10:40:40.041423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.264 qpair failed and we were unable to recover it. 00:27:06.264 [2024-12-12 10:40:40.041634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.264 [2024-12-12 10:40:40.041671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.264 qpair failed and we were unable to recover it. 00:27:06.265 [2024-12-12 10:40:40.041843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.265 [2024-12-12 10:40:40.041877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.265 qpair failed and we were unable to recover it. 00:27:06.265 [2024-12-12 10:40:40.042006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.265 [2024-12-12 10:40:40.042040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.265 qpair failed and we were unable to recover it. 00:27:06.265 [2024-12-12 10:40:40.042243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.265 [2024-12-12 10:40:40.042278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.265 qpair failed and we were unable to recover it. 
00:27:06.265 [2024-12-12 10:40:40.042516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.265 [2024-12-12 10:40:40.042549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.265 qpair failed and we were unable to recover it. 00:27:06.265 [2024-12-12 10:40:40.042768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.265 [2024-12-12 10:40:40.042815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.265 qpair failed and we were unable to recover it. 00:27:06.265 [2024-12-12 10:40:40.042997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.265 [2024-12-12 10:40:40.043030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.265 qpair failed and we were unable to recover it. 00:27:06.265 [2024-12-12 10:40:40.043243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.265 [2024-12-12 10:40:40.043276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.265 qpair failed and we were unable to recover it. 00:27:06.265 [2024-12-12 10:40:40.043513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.265 [2024-12-12 10:40:40.043549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.265 qpair failed and we were unable to recover it. 00:27:06.265 [2024-12-12 10:40:40.043806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.265 [2024-12-12 10:40:40.043840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.265 qpair failed and we were unable to recover it. 00:27:06.265 [2024-12-12 10:40:40.044117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.265 [2024-12-12 10:40:40.044151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.265 qpair failed and we were unable to recover it. 00:27:06.265 [2024-12-12 10:40:40.044390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.265 [2024-12-12 10:40:40.044425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.265 qpair failed and we were unable to recover it. 00:27:06.265 [2024-12-12 10:40:40.044708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.265 [2024-12-12 10:40:40.044743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.265 qpair failed and we were unable to recover it. 00:27:06.265 [2024-12-12 10:40:40.044917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.265 [2024-12-12 10:40:40.044950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.265 qpair failed and we were unable to recover it. 
00:27:06.265 [2024-12-12 10:40:40.045138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.265 [2024-12-12 10:40:40.045170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.265 qpair failed and we were unable to recover it. 00:27:06.265 [2024-12-12 10:40:40.045290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.265 [2024-12-12 10:40:40.045324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.265 qpair failed and we were unable to recover it. 00:27:06.265 [2024-12-12 10:40:40.045561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.265 [2024-12-12 10:40:40.045612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.265 qpair failed and we were unable to recover it. 00:27:06.265 [2024-12-12 10:40:40.045819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.265 [2024-12-12 10:40:40.045852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.265 qpair failed and we were unable to recover it. 00:27:06.265 [2024-12-12 10:40:40.046046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.265 [2024-12-12 10:40:40.046079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.265 qpair failed and we were unable to recover it. 00:27:06.265 [2024-12-12 10:40:40.046262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.265 [2024-12-12 10:40:40.046295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.265 qpair failed and we were unable to recover it. 00:27:06.265 [2024-12-12 10:40:40.046541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.265 [2024-12-12 10:40:40.046585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.265 qpair failed and we were unable to recover it. 00:27:06.265 [2024-12-12 10:40:40.046762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.265 [2024-12-12 10:40:40.046800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.265 qpair failed and we were unable to recover it. 00:27:06.265 [2024-12-12 10:40:40.046984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.265 [2024-12-12 10:40:40.047024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.265 qpair failed and we were unable to recover it. 00:27:06.265 [2024-12-12 10:40:40.047162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.265 [2024-12-12 10:40:40.047197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.265 qpair failed and we were unable to recover it. 
00:27:06.265 [2024-12-12 10:40:40.047417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.265 [2024-12-12 10:40:40.047451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.265 qpair failed and we were unable to recover it. 00:27:06.265 [2024-12-12 10:40:40.047715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.265 [2024-12-12 10:40:40.047751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.265 qpair failed and we were unable to recover it. 00:27:06.265 [2024-12-12 10:40:40.047925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.265 [2024-12-12 10:40:40.047958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.265 qpair failed and we were unable to recover it. 00:27:06.265 [2024-12-12 10:40:40.048134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.265 [2024-12-12 10:40:40.048168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.265 qpair failed and we were unable to recover it. 00:27:06.265 [2024-12-12 10:40:40.048363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.265 [2024-12-12 10:40:40.048398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.265 qpair failed and we were unable to recover it. 00:27:06.265 [2024-12-12 10:40:40.048601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.265 [2024-12-12 10:40:40.048637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.265 qpair failed and we were unable to recover it. 00:27:06.265 [2024-12-12 10:40:40.048813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.265 [2024-12-12 10:40:40.048847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.265 qpair failed and we were unable to recover it. 00:27:06.265 [2024-12-12 10:40:40.048981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.265 [2024-12-12 10:40:40.049014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.265 qpair failed and we were unable to recover it. 00:27:06.265 [2024-12-12 10:40:40.049194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.265 [2024-12-12 10:40:40.049228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.265 qpair failed and we were unable to recover it. 00:27:06.265 [2024-12-12 10:40:40.049401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.265 [2024-12-12 10:40:40.049433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.265 qpair failed and we were unable to recover it. 
00:27:06.265 [2024-12-12 10:40:40.049650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.265 [2024-12-12 10:40:40.049692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.265 qpair failed and we were unable to recover it. 00:27:06.265 [2024-12-12 10:40:40.049879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.265 [2024-12-12 10:40:40.049915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.266 qpair failed and we were unable to recover it. 00:27:06.266 [2024-12-12 10:40:40.050151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.266 [2024-12-12 10:40:40.050186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.266 qpair failed and we were unable to recover it. 00:27:06.266 [2024-12-12 10:40:40.050375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.266 [2024-12-12 10:40:40.050409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.266 qpair failed and we were unable to recover it. 00:27:06.266 [2024-12-12 10:40:40.050610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.266 [2024-12-12 10:40:40.050645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.266 qpair failed and we were unable to recover it. 00:27:06.266 [2024-12-12 10:40:40.050779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.266 [2024-12-12 10:40:40.050812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.266 qpair failed and we were unable to recover it. 00:27:06.266 [2024-12-12 10:40:40.050992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.266 [2024-12-12 10:40:40.051025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.266 qpair failed and we were unable to recover it. 00:27:06.266 [2024-12-12 10:40:40.051209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.266 [2024-12-12 10:40:40.051246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.266 qpair failed and we were unable to recover it. 00:27:06.266 [2024-12-12 10:40:40.051483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.266 [2024-12-12 10:40:40.051516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.266 qpair failed and we were unable to recover it. 00:27:06.266 [2024-12-12 10:40:40.051768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.266 [2024-12-12 10:40:40.051804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.266 qpair failed and we were unable to recover it. 
00:27:06.266 [2024-12-12 10:40:40.052065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.266 [2024-12-12 10:40:40.052116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.266 qpair failed and we were unable to recover it. 00:27:06.266 [2024-12-12 10:40:40.052248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.266 [2024-12-12 10:40:40.052296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.266 qpair failed and we were unable to recover it. 00:27:06.266 [2024-12-12 10:40:40.052487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.266 [2024-12-12 10:40:40.052520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.266 qpair failed and we were unable to recover it. 00:27:06.266 [2024-12-12 10:40:40.052721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.266 [2024-12-12 10:40:40.052755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.266 qpair failed and we were unable to recover it. 00:27:06.266 [2024-12-12 10:40:40.052927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.266 [2024-12-12 10:40:40.052960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.266 qpair failed and we were unable to recover it. 00:27:06.266 [2024-12-12 10:40:40.053162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.266 [2024-12-12 10:40:40.053195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.266 qpair failed and we were unable to recover it. 00:27:06.266 [2024-12-12 10:40:40.053314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.266 [2024-12-12 10:40:40.053352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.266 qpair failed and we were unable to recover it. 00:27:06.266 [2024-12-12 10:40:40.053474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.266 [2024-12-12 10:40:40.053507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.266 qpair failed and we were unable to recover it. 00:27:06.266 [2024-12-12 10:40:40.053697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.266 [2024-12-12 10:40:40.053731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.266 qpair failed and we were unable to recover it. 00:27:06.266 [2024-12-12 10:40:40.053858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.266 [2024-12-12 10:40:40.053891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.266 qpair failed and we were unable to recover it. 
00:27:06.266 [2024-12-12 10:40:40.054016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.266 [2024-12-12 10:40:40.054049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.266 qpair failed and we were unable to recover it. 00:27:06.266 [2024-12-12 10:40:40.054240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.266 [2024-12-12 10:40:40.054273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.266 qpair failed and we were unable to recover it. 00:27:06.266 [2024-12-12 10:40:40.054398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.266 [2024-12-12 10:40:40.054431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.266 qpair failed and we were unable to recover it. 00:27:06.266 [2024-12-12 10:40:40.054620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.266 [2024-12-12 10:40:40.054655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.266 qpair failed and we were unable to recover it. 00:27:06.266 [2024-12-12 10:40:40.054832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.266 [2024-12-12 10:40:40.054866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.266 qpair failed and we were unable to recover it. 00:27:06.266 [2024-12-12 10:40:40.055123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.266 [2024-12-12 10:40:40.055156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.266 qpair failed and we were unable to recover it. 00:27:06.266 [2024-12-12 10:40:40.055290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.266 [2024-12-12 10:40:40.055323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.266 qpair failed and we were unable to recover it. 00:27:06.266 [2024-12-12 10:40:40.055495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.266 [2024-12-12 10:40:40.055529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.266 qpair failed and we were unable to recover it. 00:27:06.266 [2024-12-12 10:40:40.055654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.266 [2024-12-12 10:40:40.055694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.266 qpair failed and we were unable to recover it. 00:27:06.266 [2024-12-12 10:40:40.055808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.266 [2024-12-12 10:40:40.055841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.266 qpair failed and we were unable to recover it. 
00:27:06.266 [2024-12-12 10:40:40.056039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.267 [2024-12-12 10:40:40.056073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.267 qpair failed and we were unable to recover it. 00:27:06.267 [2024-12-12 10:40:40.056246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.267 [2024-12-12 10:40:40.056279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.267 qpair failed and we were unable to recover it. 00:27:06.267 [2024-12-12 10:40:40.056393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.267 [2024-12-12 10:40:40.056427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.267 qpair failed and we were unable to recover it. 00:27:06.267 [2024-12-12 10:40:40.056565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.267 [2024-12-12 10:40:40.056609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.267 qpair failed and we were unable to recover it. 00:27:06.267 [2024-12-12 10:40:40.056781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.267 [2024-12-12 10:40:40.056814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.267 qpair failed and we were unable to recover it. 00:27:06.267 [2024-12-12 10:40:40.057028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.267 [2024-12-12 10:40:40.057061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.267 qpair failed and we were unable to recover it. 00:27:06.267 [2024-12-12 10:40:40.057298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.267 [2024-12-12 10:40:40.057332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.267 qpair failed and we were unable to recover it. 00:27:06.267 [2024-12-12 10:40:40.057542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.267 [2024-12-12 10:40:40.057600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.267 qpair failed and we were unable to recover it. 00:27:06.267 [2024-12-12 10:40:40.057863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.267 [2024-12-12 10:40:40.057896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.267 qpair failed and we were unable to recover it. 00:27:06.267 [2024-12-12 10:40:40.058091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.267 [2024-12-12 10:40:40.058124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.267 qpair failed and we were unable to recover it. 
00:27:06.267 [2024-12-12 10:40:40.058376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.267 [2024-12-12 10:40:40.058409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.267 qpair failed and we were unable to recover it. 00:27:06.267 [2024-12-12 10:40:40.058594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.267 [2024-12-12 10:40:40.058629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.267 qpair failed and we were unable to recover it. 00:27:06.267 [2024-12-12 10:40:40.058811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.267 [2024-12-12 10:40:40.058844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.267 qpair failed and we were unable to recover it. 00:27:06.267 [2024-12-12 10:40:40.058973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.267 [2024-12-12 10:40:40.059006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.267 qpair failed and we were unable to recover it. 00:27:06.267 [2024-12-12 10:40:40.059199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.267 [2024-12-12 10:40:40.059233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.267 qpair failed and we were unable to recover it. 00:27:06.267 [2024-12-12 10:40:40.059370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.267 [2024-12-12 10:40:40.059404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.267 qpair failed and we were unable to recover it. 00:27:06.267 [2024-12-12 10:40:40.059540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.267 [2024-12-12 10:40:40.059586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.267 qpair failed and we were unable to recover it. 00:27:06.267 [2024-12-12 10:40:40.059796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.267 [2024-12-12 10:40:40.059829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.267 qpair failed and we were unable to recover it. 00:27:06.267 [2024-12-12 10:40:40.060002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.267 [2024-12-12 10:40:40.060037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.267 qpair failed and we were unable to recover it. 00:27:06.267 [2024-12-12 10:40:40.060144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.267 [2024-12-12 10:40:40.060174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.267 qpair failed and we were unable to recover it. 
00:27:06.267 [2024-12-12 10:40:40.060363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.267 [2024-12-12 10:40:40.060398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.267 qpair failed and we were unable to recover it. 00:27:06.267 [2024-12-12 10:40:40.060587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.267 [2024-12-12 10:40:40.060623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.267 qpair failed and we were unable to recover it. 00:27:06.267 [2024-12-12 10:40:40.060899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.267 [2024-12-12 10:40:40.060932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.267 qpair failed and we were unable to recover it. 00:27:06.267 [2024-12-12 10:40:40.061128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.267 [2024-12-12 10:40:40.061164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.267 qpair failed and we were unable to recover it. 00:27:06.267 [2024-12-12 10:40:40.061339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.267 [2024-12-12 10:40:40.061372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.267 qpair failed and we were unable to recover it. 00:27:06.267 [2024-12-12 10:40:40.061603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.267 [2024-12-12 10:40:40.061638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.267 qpair failed and we were unable to recover it. 00:27:06.267 [2024-12-12 10:40:40.061825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.267 [2024-12-12 10:40:40.061858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.267 qpair failed and we were unable to recover it. 00:27:06.267 [2024-12-12 10:40:40.062116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.267 [2024-12-12 10:40:40.062149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.267 qpair failed and we were unable to recover it. 00:27:06.267 [2024-12-12 10:40:40.062266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.267 [2024-12-12 10:40:40.062299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.267 qpair failed and we were unable to recover it. 00:27:06.267 [2024-12-12 10:40:40.062475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.267 [2024-12-12 10:40:40.062508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.267 qpair failed and we were unable to recover it. 
00:27:06.267 [2024-12-12 10:40:40.062657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.267 [2024-12-12 10:40:40.062692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.267 qpair failed and we were unable to recover it. 00:27:06.267 [2024-12-12 10:40:40.062828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.267 [2024-12-12 10:40:40.062867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.267 qpair failed and we were unable to recover it. 00:27:06.267 [2024-12-12 10:40:40.063003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.267 [2024-12-12 10:40:40.063036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.267 qpair failed and we were unable to recover it. 00:27:06.267 [2024-12-12 10:40:40.063209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.267 [2024-12-12 10:40:40.063242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.267 qpair failed and we were unable to recover it. 00:27:06.267 [2024-12-12 10:40:40.063429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.267 [2024-12-12 10:40:40.063463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.267 qpair failed and we were unable to recover it. 00:27:06.268 [2024-12-12 10:40:40.063591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.268 [2024-12-12 10:40:40.063626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.268 qpair failed and we were unable to recover it. 00:27:06.268 [2024-12-12 10:40:40.063808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.268 [2024-12-12 10:40:40.063841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.268 qpair failed and we were unable to recover it. 00:27:06.268 [2024-12-12 10:40:40.063959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.268 [2024-12-12 10:40:40.063992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.268 qpair failed and we were unable to recover it. 00:27:06.268 [2024-12-12 10:40:40.064254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.268 [2024-12-12 10:40:40.064294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.268 qpair failed and we were unable to recover it. 00:27:06.268 [2024-12-12 10:40:40.064423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.268 [2024-12-12 10:40:40.064456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.268 qpair failed and we were unable to recover it. 
00:27:06.268 [2024-12-12 10:40:40.064630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.268 [2024-12-12 10:40:40.064665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.268 qpair failed and we were unable to recover it. 00:27:06.268 [2024-12-12 10:40:40.064848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.268 [2024-12-12 10:40:40.064882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.268 qpair failed and we were unable to recover it. 00:27:06.268 [2024-12-12 10:40:40.065063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.268 [2024-12-12 10:40:40.065096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.268 qpair failed and we were unable to recover it. 00:27:06.268 [2024-12-12 10:40:40.065213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.268 [2024-12-12 10:40:40.065246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.268 qpair failed and we were unable to recover it. 00:27:06.268 [2024-12-12 10:40:40.065384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.268 [2024-12-12 10:40:40.065417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.268 qpair failed and we were unable to recover it. 00:27:06.268 [2024-12-12 10:40:40.065601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.268 [2024-12-12 10:40:40.065636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.268 qpair failed and we were unable to recover it. 00:27:06.268 [2024-12-12 10:40:40.065840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.268 [2024-12-12 10:40:40.065873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.268 qpair failed and we were unable to recover it. 00:27:06.268 [2024-12-12 10:40:40.066008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.268 [2024-12-12 10:40:40.066041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.268 qpair failed and we were unable to recover it. 00:27:06.268 [2024-12-12 10:40:40.066161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.268 [2024-12-12 10:40:40.066193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.268 qpair failed and we were unable to recover it. 00:27:06.268 [2024-12-12 10:40:40.066320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.268 [2024-12-12 10:40:40.066353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.268 qpair failed and we were unable to recover it. 
00:27:06.268 [2024-12-12 10:40:40.066542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.268 [2024-12-12 10:40:40.066582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.268 qpair failed and we were unable to recover it. 00:27:06.268 [2024-12-12 10:40:40.066703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.268 [2024-12-12 10:40:40.066736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.268 qpair failed and we were unable to recover it. 00:27:06.268 [2024-12-12 10:40:40.066920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.268 [2024-12-12 10:40:40.066953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.268 qpair failed and we were unable to recover it. 00:27:06.268 [2024-12-12 10:40:40.067192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.268 [2024-12-12 10:40:40.067226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.268 qpair failed and we were unable to recover it. 00:27:06.268 [2024-12-12 10:40:40.067350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.268 [2024-12-12 10:40:40.067383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.268 qpair failed and we were unable to recover it. 00:27:06.268 [2024-12-12 10:40:40.067644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.268 [2024-12-12 10:40:40.067679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.268 qpair failed and we were unable to recover it. 00:27:06.268 [2024-12-12 10:40:40.067809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.268 [2024-12-12 10:40:40.067843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.268 qpair failed and we were unable to recover it. 00:27:06.268 [2024-12-12 10:40:40.068080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.268 [2024-12-12 10:40:40.068114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.268 qpair failed and we were unable to recover it. 00:27:06.268 [2024-12-12 10:40:40.068238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.268 [2024-12-12 10:40:40.068272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.268 qpair failed and we were unable to recover it. 00:27:06.268 [2024-12-12 10:40:40.068540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.268 [2024-12-12 10:40:40.068582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.268 qpair failed and we were unable to recover it. 
00:27:06.268 [2024-12-12 10:40:40.068696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.268 [2024-12-12 10:40:40.068730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.268 qpair failed and we were unable to recover it. 00:27:06.268 [2024-12-12 10:40:40.068837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.268 [2024-12-12 10:40:40.068868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.268 qpair failed and we were unable to recover it. 00:27:06.268 [2024-12-12 10:40:40.069066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.268 [2024-12-12 10:40:40.069099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.268 qpair failed and we were unable to recover it. 00:27:06.268 [2024-12-12 10:40:40.069269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.268 [2024-12-12 10:40:40.069302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.268 qpair failed and we were unable to recover it. 00:27:06.268 [2024-12-12 10:40:40.069474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.268 [2024-12-12 10:40:40.069507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.268 qpair failed and we were unable to recover it. 00:27:06.268 [2024-12-12 10:40:40.069723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.268 [2024-12-12 10:40:40.069758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.268 qpair failed and we were unable to recover it. 00:27:06.268 [2024-12-12 10:40:40.069942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.268 [2024-12-12 10:40:40.069975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.268 qpair failed and we were unable to recover it. 00:27:06.268 [2024-12-12 10:40:40.070096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.268 [2024-12-12 10:40:40.070129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.268 qpair failed and we were unable to recover it. 00:27:06.268 [2024-12-12 10:40:40.070337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.268 [2024-12-12 10:40:40.070371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.268 qpair failed and we were unable to recover it. 00:27:06.268 [2024-12-12 10:40:40.070647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.268 [2024-12-12 10:40:40.070682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.268 qpair failed and we were unable to recover it. 
00:27:06.269 [2024-12-12 10:40:40.070897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.269 [2024-12-12 10:40:40.070930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.269 qpair failed and we were unable to recover it. 00:27:06.269 [2024-12-12 10:40:40.071119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.269 [2024-12-12 10:40:40.071152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.269 qpair failed and we were unable to recover it. 00:27:06.269 [2024-12-12 10:40:40.071353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.269 [2024-12-12 10:40:40.071385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.269 qpair failed and we were unable to recover it. 00:27:06.269 [2024-12-12 10:40:40.071620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.269 [2024-12-12 10:40:40.071659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.269 qpair failed and we were unable to recover it. 00:27:06.269 [2024-12-12 10:40:40.071899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.269 [2024-12-12 10:40:40.071933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.269 qpair failed and we were unable to recover it. 00:27:06.269 [2024-12-12 10:40:40.072135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.269 [2024-12-12 10:40:40.072168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.269 qpair failed and we were unable to recover it. 00:27:06.269 [2024-12-12 10:40:40.072388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.269 [2024-12-12 10:40:40.072421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.269 qpair failed and we were unable to recover it. 00:27:06.269 [2024-12-12 10:40:40.072564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.269 [2024-12-12 10:40:40.072605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.269 qpair failed and we were unable to recover it. 00:27:06.269 [2024-12-12 10:40:40.072901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.269 [2024-12-12 10:40:40.072939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.269 qpair failed and we were unable to recover it. 00:27:06.269 [2024-12-12 10:40:40.073061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.269 [2024-12-12 10:40:40.073095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.269 qpair failed and we were unable to recover it. 
00:27:06.269 [2024-12-12 10:40:40.073358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.269 [2024-12-12 10:40:40.073426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.269 qpair failed and we were unable to recover it. 00:27:06.269 [2024-12-12 10:40:40.073671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.269 [2024-12-12 10:40:40.073743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.269 qpair failed and we were unable to recover it. 00:27:06.269 [2024-12-12 10:40:40.074008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.269 [2024-12-12 10:40:40.074050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.269 qpair failed and we were unable to recover it. 00:27:06.269 [2024-12-12 10:40:40.074239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.269 [2024-12-12 10:40:40.074310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.269 qpair failed and we were unable to recover it. 00:27:06.269 [2024-12-12 10:40:40.074643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.269 [2024-12-12 10:40:40.074711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.269 qpair failed and we were unable to recover it. 00:27:06.269 [2024-12-12 10:40:40.074973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.269 [2024-12-12 10:40:40.075029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.269 qpair failed and we were unable to recover it. 00:27:06.269 [2024-12-12 10:40:40.075351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.269 [2024-12-12 10:40:40.075414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.269 qpair failed and we were unable to recover it. 00:27:06.269 [2024-12-12 10:40:40.075747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.269 [2024-12-12 10:40:40.075819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.269 qpair failed and we were unable to recover it. 00:27:06.269 [2024-12-12 10:40:40.076031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.269 [2024-12-12 10:40:40.076068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.269 qpair failed and we were unable to recover it. 00:27:06.269 [2024-12-12 10:40:40.076196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.269 [2024-12-12 10:40:40.076230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.269 qpair failed and we were unable to recover it. 
00:27:06.269 [2024-12-12 10:40:40.076352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.269 [2024-12-12 10:40:40.076385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.269 qpair failed and we were unable to recover it. 00:27:06.269 [2024-12-12 10:40:40.076628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.269 [2024-12-12 10:40:40.076664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.269 qpair failed and we were unable to recover it. 00:27:06.269 [2024-12-12 10:40:40.076791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.269 [2024-12-12 10:40:40.076824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.269 qpair failed and we were unable to recover it. 00:27:06.269 [2024-12-12 10:40:40.076949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.269 [2024-12-12 10:40:40.076982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.269 qpair failed and we were unable to recover it. 00:27:06.269 [2024-12-12 10:40:40.077152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.269 [2024-12-12 10:40:40.077185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.269 qpair failed and we were unable to recover it. 00:27:06.269 [2024-12-12 10:40:40.077429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.269 [2024-12-12 10:40:40.077474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.269 qpair failed and we were unable to recover it. 00:27:06.269 [2024-12-12 10:40:40.077662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.269 [2024-12-12 10:40:40.077698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.269 qpair failed and we were unable to recover it. 00:27:06.269 [2024-12-12 10:40:40.077875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.269 [2024-12-12 10:40:40.077908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.269 qpair failed and we were unable to recover it. 00:27:06.269 [2024-12-12 10:40:40.078012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.269 [2024-12-12 10:40:40.078050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.269 qpair failed and we were unable to recover it. 00:27:06.269 [2024-12-12 10:40:40.078310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.269 [2024-12-12 10:40:40.078343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.269 qpair failed and we were unable to recover it. 
00:27:06.269 [2024-12-12 10:40:40.078445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.269 [2024-12-12 10:40:40.078478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.269 qpair failed and we were unable to recover it. 00:27:06.269 [2024-12-12 10:40:40.078664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.269 [2024-12-12 10:40:40.078698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.269 qpair failed and we were unable to recover it. 00:27:06.269 [2024-12-12 10:40:40.078817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.269 [2024-12-12 10:40:40.078851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.269 qpair failed and we were unable to recover it. 00:27:06.269 [2024-12-12 10:40:40.078980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.269 [2024-12-12 10:40:40.079013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.269 qpair failed and we were unable to recover it. 00:27:06.269 [2024-12-12 10:40:40.079189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.270 [2024-12-12 10:40:40.079222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.270 qpair failed and we were unable to recover it. 00:27:06.270 [2024-12-12 10:40:40.079474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.270 [2024-12-12 10:40:40.079508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.270 qpair failed and we were unable to recover it. 00:27:06.270 [2024-12-12 10:40:40.079713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.270 [2024-12-12 10:40:40.079747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.270 qpair failed and we were unable to recover it. 00:27:06.270 [2024-12-12 10:40:40.079872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.270 [2024-12-12 10:40:40.079906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.270 qpair failed and we were unable to recover it. 00:27:06.270 [2024-12-12 10:40:40.080040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.270 [2024-12-12 10:40:40.080073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.270 qpair failed and we were unable to recover it. 00:27:06.270 [2024-12-12 10:40:40.080282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.270 [2024-12-12 10:40:40.080315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.270 qpair failed and we were unable to recover it. 
00:27:06.270 [2024-12-12 10:40:40.080555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.270 [2024-12-12 10:40:40.080598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.270 qpair failed and we were unable to recover it. 00:27:06.270 [2024-12-12 10:40:40.080782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.270 [2024-12-12 10:40:40.080815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.270 qpair failed and we were unable to recover it. 00:27:06.270 [2024-12-12 10:40:40.080956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.270 [2024-12-12 10:40:40.080989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.270 qpair failed and we were unable to recover it. 00:27:06.270 [2024-12-12 10:40:40.081164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.270 [2024-12-12 10:40:40.081197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.270 qpair failed and we were unable to recover it. 00:27:06.270 [2024-12-12 10:40:40.081324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.270 [2024-12-12 10:40:40.081357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.270 qpair failed and we were unable to recover it. 00:27:06.270 [2024-12-12 10:40:40.081477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.270 [2024-12-12 10:40:40.081509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.270 qpair failed and we were unable to recover it. 00:27:06.270 [2024-12-12 10:40:40.081648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.270 [2024-12-12 10:40:40.081683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.270 qpair failed and we were unable to recover it. 00:27:06.270 [2024-12-12 10:40:40.081854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.270 [2024-12-12 10:40:40.081887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.270 qpair failed and we were unable to recover it. 00:27:06.270 [2024-12-12 10:40:40.082121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.270 [2024-12-12 10:40:40.082159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.270 qpair failed and we were unable to recover it. 00:27:06.270 [2024-12-12 10:40:40.082332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.270 [2024-12-12 10:40:40.082365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.270 qpair failed and we were unable to recover it. 
00:27:06.270 [2024-12-12 10:40:40.082546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.270 [2024-12-12 10:40:40.082591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.270 qpair failed and we were unable to recover it. 00:27:06.270 [2024-12-12 10:40:40.082698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.270 [2024-12-12 10:40:40.082731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.270 qpair failed and we were unable to recover it. 00:27:06.270 [2024-12-12 10:40:40.082915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.270 [2024-12-12 10:40:40.082948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.270 qpair failed and we were unable to recover it. 00:27:06.270 [2024-12-12 10:40:40.083139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.270 [2024-12-12 10:40:40.083172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.270 qpair failed and we were unable to recover it. 00:27:06.270 [2024-12-12 10:40:40.083434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.270 [2024-12-12 10:40:40.083466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.270 qpair failed and we were unable to recover it. 00:27:06.270 [2024-12-12 10:40:40.083647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.270 [2024-12-12 10:40:40.083682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.270 qpair failed and we were unable to recover it. 00:27:06.270 [2024-12-12 10:40:40.083935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.270 [2024-12-12 10:40:40.083968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.270 qpair failed and we were unable to recover it. 00:27:06.270 [2024-12-12 10:40:40.084108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.270 [2024-12-12 10:40:40.084140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.270 qpair failed and we were unable to recover it. 00:27:06.270 [2024-12-12 10:40:40.084372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.270 [2024-12-12 10:40:40.084405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.270 qpair failed and we were unable to recover it. 00:27:06.270 [2024-12-12 10:40:40.084611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.270 [2024-12-12 10:40:40.084644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.270 qpair failed and we were unable to recover it. 
00:27:06.270 [2024-12-12 10:40:40.084847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.270 [2024-12-12 10:40:40.084879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.270 qpair failed and we were unable to recover it. 00:27:06.270 [2024-12-12 10:40:40.085050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.270 [2024-12-12 10:40:40.085083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.270 qpair failed and we were unable to recover it. 00:27:06.270 [2024-12-12 10:40:40.085218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.270 [2024-12-12 10:40:40.085251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.270 qpair failed and we were unable to recover it. 00:27:06.270 [2024-12-12 10:40:40.085389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.270 [2024-12-12 10:40:40.085422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.270 qpair failed and we were unable to recover it. 00:27:06.270 [2024-12-12 10:40:40.085613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.270 [2024-12-12 10:40:40.085648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.270 qpair failed and we were unable to recover it. 00:27:06.270 [2024-12-12 10:40:40.085891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.270 [2024-12-12 10:40:40.085924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.270 qpair failed and we were unable to recover it. 00:27:06.270 [2024-12-12 10:40:40.086040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.270 [2024-12-12 10:40:40.086070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.270 qpair failed and we were unable to recover it. 00:27:06.270 [2024-12-12 10:40:40.086328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.270 [2024-12-12 10:40:40.086362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.270 qpair failed and we were unable to recover it. 00:27:06.270 [2024-12-12 10:40:40.086587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.271 [2024-12-12 10:40:40.086621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.271 qpair failed and we were unable to recover it. 00:27:06.271 [2024-12-12 10:40:40.086793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.271 [2024-12-12 10:40:40.086826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.271 qpair failed and we were unable to recover it. 
00:27:06.271 [2024-12-12 10:40:40.086934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.271 [2024-12-12 10:40:40.086964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.271 qpair failed and we were unable to recover it. 00:27:06.271 [2024-12-12 10:40:40.087233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.271 [2024-12-12 10:40:40.087265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.271 qpair failed and we were unable to recover it. 00:27:06.271 [2024-12-12 10:40:40.087390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.271 [2024-12-12 10:40:40.087423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.271 qpair failed and we were unable to recover it. 00:27:06.271 [2024-12-12 10:40:40.087616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.271 [2024-12-12 10:40:40.087651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.271 qpair failed and we were unable to recover it. 00:27:06.271 [2024-12-12 10:40:40.087843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.271 [2024-12-12 10:40:40.087876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.271 qpair failed and we were unable to recover it. 00:27:06.271 [2024-12-12 10:40:40.088054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.271 [2024-12-12 10:40:40.088122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.271 qpair failed and we were unable to recover it. 00:27:06.271 [2024-12-12 10:40:40.088369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.271 [2024-12-12 10:40:40.088440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.271 qpair failed and we were unable to recover it. 00:27:06.271 [2024-12-12 10:40:40.088649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.271 [2024-12-12 10:40:40.088689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.271 qpair failed and we were unable to recover it. 00:27:06.271 [2024-12-12 10:40:40.088958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.271 [2024-12-12 10:40:40.088992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.271 qpair failed and we were unable to recover it. 00:27:06.271 [2024-12-12 10:40:40.089127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.271 [2024-12-12 10:40:40.089161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.271 qpair failed and we were unable to recover it. 
00:27:06.271 [2024-12-12 10:40:40.089294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.271 [2024-12-12 10:40:40.089327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.271 qpair failed and we were unable to recover it. 00:27:06.271 [2024-12-12 10:40:40.089583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.271 [2024-12-12 10:40:40.089618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.271 qpair failed and we were unable to recover it. 00:27:06.271 [2024-12-12 10:40:40.089884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.271 [2024-12-12 10:40:40.089917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.271 qpair failed and we were unable to recover it. 00:27:06.271 [2024-12-12 10:40:40.090103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.271 [2024-12-12 10:40:40.090136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.271 qpair failed and we were unable to recover it. 00:27:06.271 [2024-12-12 10:40:40.090318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.271 [2024-12-12 10:40:40.090352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.271 qpair failed and we were unable to recover it. 00:27:06.271 [2024-12-12 10:40:40.090613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.271 [2024-12-12 10:40:40.090649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.271 qpair failed and we were unable to recover it. 00:27:06.271 [2024-12-12 10:40:40.090906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.271 [2024-12-12 10:40:40.090939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.271 qpair failed and we were unable to recover it. 00:27:06.271 [2024-12-12 10:40:40.091122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.271 [2024-12-12 10:40:40.091156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.271 qpair failed and we were unable to recover it. 00:27:06.271 [2024-12-12 10:40:40.091354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.271 [2024-12-12 10:40:40.091395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.271 qpair failed and we were unable to recover it. 00:27:06.271 [2024-12-12 10:40:40.091634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.271 [2024-12-12 10:40:40.091668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.271 qpair failed and we were unable to recover it. 
00:27:06.271 [2024-12-12 10:40:40.091798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.271 [2024-12-12 10:40:40.091831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.271 qpair failed and we were unable to recover it. 00:27:06.271 [2024-12-12 10:40:40.091963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.271 [2024-12-12 10:40:40.091996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.271 qpair failed and we were unable to recover it. 00:27:06.271 [2024-12-12 10:40:40.092173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.271 [2024-12-12 10:40:40.092205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.271 qpair failed and we were unable to recover it. 00:27:06.271 [2024-12-12 10:40:40.092421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.271 [2024-12-12 10:40:40.092453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.271 qpair failed and we were unable to recover it. 00:27:06.271 [2024-12-12 10:40:40.092630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.271 [2024-12-12 10:40:40.092665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.271 qpair failed and we were unable to recover it. 00:27:06.271 [2024-12-12 10:40:40.092904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.271 [2024-12-12 10:40:40.092937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.271 qpair failed and we were unable to recover it. 00:27:06.271 [2024-12-12 10:40:40.093175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.271 [2024-12-12 10:40:40.093208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.271 qpair failed and we were unable to recover it. 00:27:06.271 [2024-12-12 10:40:40.093400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.271 [2024-12-12 10:40:40.093435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.271 qpair failed and we were unable to recover it. 00:27:06.271 [2024-12-12 10:40:40.093567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.271 [2024-12-12 10:40:40.093624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.271 qpair failed and we were unable to recover it. 00:27:06.272 [2024-12-12 10:40:40.093739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.272 [2024-12-12 10:40:40.093770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.272 qpair failed and we were unable to recover it. 
00:27:06.272 [2024-12-12 10:40:40.093957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.272 [2024-12-12 10:40:40.093990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.272 qpair failed and we were unable to recover it. 00:27:06.272 [2024-12-12 10:40:40.094227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.272 [2024-12-12 10:40:40.094261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.272 qpair failed and we were unable to recover it. 00:27:06.272 [2024-12-12 10:40:40.094435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.272 [2024-12-12 10:40:40.094469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.272 qpair failed and we were unable to recover it. 00:27:06.272 [2024-12-12 10:40:40.094707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.272 [2024-12-12 10:40:40.094743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.272 qpair failed and we were unable to recover it. 00:27:06.272 [2024-12-12 10:40:40.094988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.272 [2024-12-12 10:40:40.095022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.272 qpair failed and we were unable to recover it. 00:27:06.272 [2024-12-12 10:40:40.095159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.272 [2024-12-12 10:40:40.095192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.272 qpair failed and we were unable to recover it. 00:27:06.272 [2024-12-12 10:40:40.095368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.272 [2024-12-12 10:40:40.095401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.272 qpair failed and we were unable to recover it. 00:27:06.272 [2024-12-12 10:40:40.095586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.272 [2024-12-12 10:40:40.095621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.272 qpair failed and we were unable to recover it. 00:27:06.272 [2024-12-12 10:40:40.095828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.272 [2024-12-12 10:40:40.095861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.272 qpair failed and we were unable to recover it. 00:27:06.272 [2024-12-12 10:40:40.096123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.272 [2024-12-12 10:40:40.096155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.272 qpair failed and we were unable to recover it. 
00:27:06.272 [2024-12-12 10:40:40.096346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.272 [2024-12-12 10:40:40.096379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.272 qpair failed and we were unable to recover it.
00:27:06.272 [2024-12-12 10:40:40.096552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.272 [2024-12-12 10:40:40.096593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.272 qpair failed and we were unable to recover it.
00:27:06.272 [2024-12-12 10:40:40.096719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.272 [2024-12-12 10:40:40.096751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.272 qpair failed and we were unable to recover it.
00:27:06.272 [2024-12-12 10:40:40.096927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.272 [2024-12-12 10:40:40.096960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.272 qpair failed and we were unable to recover it.
00:27:06.272 [2024-12-12 10:40:40.097245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.272 [2024-12-12 10:40:40.097278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.272 qpair failed and we were unable to recover it.
00:27:06.272 [2024-12-12 10:40:40.097522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.272 [2024-12-12 10:40:40.097558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.272 qpair failed and we were unable to recover it.
00:27:06.272 [2024-12-12 10:40:40.097780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.272 [2024-12-12 10:40:40.097813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.272 qpair failed and we were unable to recover it.
00:27:06.272 [2024-12-12 10:40:40.097987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.272 [2024-12-12 10:40:40.098020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.272 qpair failed and we were unable to recover it.
00:27:06.272 [2024-12-12 10:40:40.098201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.272 [2024-12-12 10:40:40.098233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.272 qpair failed and we were unable to recover it.
00:27:06.272 [2024-12-12 10:40:40.098501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.272 [2024-12-12 10:40:40.098534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.272 qpair failed and we were unable to recover it.
00:27:06.272 [2024-12-12 10:40:40.098756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.272 [2024-12-12 10:40:40.098791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.272 qpair failed and we were unable to recover it.
00:27:06.272 [2024-12-12 10:40:40.098966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.272 [2024-12-12 10:40:40.098999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.272 qpair failed and we were unable to recover it.
00:27:06.272 [2024-12-12 10:40:40.099212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.272 [2024-12-12 10:40:40.099245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.272 qpair failed and we were unable to recover it.
00:27:06.272 [2024-12-12 10:40:40.099453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.272 [2024-12-12 10:40:40.099487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.272 qpair failed and we were unable to recover it.
00:27:06.272 [2024-12-12 10:40:40.099700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.272 [2024-12-12 10:40:40.099734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.272 qpair failed and we were unable to recover it.
00:27:06.272 [2024-12-12 10:40:40.099910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.272 [2024-12-12 10:40:40.099943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.272 qpair failed and we were unable to recover it.
00:27:06.272 [2024-12-12 10:40:40.100118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.272 [2024-12-12 10:40:40.100151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.272 qpair failed and we were unable to recover it.
00:27:06.272 [2024-12-12 10:40:40.100342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.272 [2024-12-12 10:40:40.100375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.272 qpair failed and we were unable to recover it.
00:27:06.272 [2024-12-12 10:40:40.100485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.272 [2024-12-12 10:40:40.100524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.272 qpair failed and we were unable to recover it.
00:27:06.272 [2024-12-12 10:40:40.100638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.272 [2024-12-12 10:40:40.100672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.272 qpair failed and we were unable to recover it.
00:27:06.272 [2024-12-12 10:40:40.100934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.272 [2024-12-12 10:40:40.100967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.272 qpair failed and we were unable to recover it.
00:27:06.272 [2024-12-12 10:40:40.101239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.272 [2024-12-12 10:40:40.101272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.272 qpair failed and we were unable to recover it.
00:27:06.272 [2024-12-12 10:40:40.101443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.272 [2024-12-12 10:40:40.101475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.272 qpair failed and we were unable to recover it.
00:27:06.272 [2024-12-12 10:40:40.101722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.272 [2024-12-12 10:40:40.101756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.272 qpair failed and we were unable to recover it.
00:27:06.273 [2024-12-12 10:40:40.101933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.273 [2024-12-12 10:40:40.101966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.273 qpair failed and we were unable to recover it.
00:27:06.273 [2024-12-12 10:40:40.102206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.273 [2024-12-12 10:40:40.102238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.273 qpair failed and we were unable to recover it.
00:27:06.273 [2024-12-12 10:40:40.102500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.273 [2024-12-12 10:40:40.102533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.273 qpair failed and we were unable to recover it.
00:27:06.273 [2024-12-12 10:40:40.102756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.273 [2024-12-12 10:40:40.102791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.273 qpair failed and we were unable to recover it.
00:27:06.273 [2024-12-12 10:40:40.102990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.273 [2024-12-12 10:40:40.103023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.273 qpair failed and we were unable to recover it.
00:27:06.273 [2024-12-12 10:40:40.103260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.273 [2024-12-12 10:40:40.103292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.273 qpair failed and we were unable to recover it.
00:27:06.273 [2024-12-12 10:40:40.103410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.273 [2024-12-12 10:40:40.103443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.273 qpair failed and we were unable to recover it.
00:27:06.273 [2024-12-12 10:40:40.103546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.273 [2024-12-12 10:40:40.103584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.273 qpair failed and we were unable to recover it.
00:27:06.273 [2024-12-12 10:40:40.103714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.273 [2024-12-12 10:40:40.103747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.273 qpair failed and we were unable to recover it.
00:27:06.273 [2024-12-12 10:40:40.103938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.273 [2024-12-12 10:40:40.103971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.273 qpair failed and we were unable to recover it.
00:27:06.273 [2024-12-12 10:40:40.104234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.273 [2024-12-12 10:40:40.104267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.273 qpair failed and we were unable to recover it.
00:27:06.273 [2024-12-12 10:40:40.104382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.273 [2024-12-12 10:40:40.104414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.273 qpair failed and we were unable to recover it.
00:27:06.273 [2024-12-12 10:40:40.104592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.273 [2024-12-12 10:40:40.104627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.273 qpair failed and we were unable to recover it.
00:27:06.273 [2024-12-12 10:40:40.104841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.273 [2024-12-12 10:40:40.104874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.273 qpair failed and we were unable to recover it.
00:27:06.273 [2024-12-12 10:40:40.105053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.273 [2024-12-12 10:40:40.105087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.273 qpair failed and we were unable to recover it.
00:27:06.273 [2024-12-12 10:40:40.105270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.273 [2024-12-12 10:40:40.105303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.273 qpair failed and we were unable to recover it.
00:27:06.273 [2024-12-12 10:40:40.105439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.273 [2024-12-12 10:40:40.105472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.273 qpair failed and we were unable to recover it.
00:27:06.273 [2024-12-12 10:40:40.105595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.273 [2024-12-12 10:40:40.105629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.273 qpair failed and we were unable to recover it.
00:27:06.273 [2024-12-12 10:40:40.105764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.273 [2024-12-12 10:40:40.105797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.273 qpair failed and we were unable to recover it.
00:27:06.273 [2024-12-12 10:40:40.105926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.273 [2024-12-12 10:40:40.105958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.273 qpair failed and we were unable to recover it.
00:27:06.273 [2024-12-12 10:40:40.106082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.273 [2024-12-12 10:40:40.106116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.273 qpair failed and we were unable to recover it.
00:27:06.273 [2024-12-12 10:40:40.106414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.273 [2024-12-12 10:40:40.106452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.273 qpair failed and we were unable to recover it.
00:27:06.273 [2024-12-12 10:40:40.106638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.273 [2024-12-12 10:40:40.106674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.273 qpair failed and we were unable to recover it.
00:27:06.273 [2024-12-12 10:40:40.106796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.273 [2024-12-12 10:40:40.106828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.273 qpair failed and we were unable to recover it.
00:27:06.273 [2024-12-12 10:40:40.106962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.273 [2024-12-12 10:40:40.106996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.273 qpair failed and we were unable to recover it.
00:27:06.273 [2024-12-12 10:40:40.107171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.273 [2024-12-12 10:40:40.107204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.273 qpair failed and we were unable to recover it.
00:27:06.273 [2024-12-12 10:40:40.107379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.273 [2024-12-12 10:40:40.107412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.273 qpair failed and we were unable to recover it.
00:27:06.273 [2024-12-12 10:40:40.107591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.273 [2024-12-12 10:40:40.107624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.273 qpair failed and we were unable to recover it.
00:27:06.273 [2024-12-12 10:40:40.107795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.273 [2024-12-12 10:40:40.107829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.273 qpair failed and we were unable to recover it.
00:27:06.273 [2024-12-12 10:40:40.108046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.273 [2024-12-12 10:40:40.108078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.273 qpair failed and we were unable to recover it.
00:27:06.273 [2024-12-12 10:40:40.108202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.273 [2024-12-12 10:40:40.108235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.273 qpair failed and we were unable to recover it.
00:27:06.273 [2024-12-12 10:40:40.108347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.273 [2024-12-12 10:40:40.108380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.273 qpair failed and we were unable to recover it.
00:27:06.273 [2024-12-12 10:40:40.108619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.273 [2024-12-12 10:40:40.108653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.273 qpair failed and we were unable to recover it.
00:27:06.273 [2024-12-12 10:40:40.108907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.273 [2024-12-12 10:40:40.108940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.273 qpair failed and we were unable to recover it.
00:27:06.273 [2024-12-12 10:40:40.109109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.273 [2024-12-12 10:40:40.109148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.273 qpair failed and we were unable to recover it.
00:27:06.273 [2024-12-12 10:40:40.109322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.274 [2024-12-12 10:40:40.109355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.274 qpair failed and we were unable to recover it.
00:27:06.274 [2024-12-12 10:40:40.109532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.274 [2024-12-12 10:40:40.109565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.274 qpair failed and we were unable to recover it.
00:27:06.274 [2024-12-12 10:40:40.109756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.274 [2024-12-12 10:40:40.109789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.274 qpair failed and we were unable to recover it.
00:27:06.274 [2024-12-12 10:40:40.109905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.274 [2024-12-12 10:40:40.109938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.274 qpair failed and we were unable to recover it.
00:27:06.274 [2024-12-12 10:40:40.110144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.274 [2024-12-12 10:40:40.110177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.274 qpair failed and we were unable to recover it.
00:27:06.274 [2024-12-12 10:40:40.110228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c290f0 (9): Bad file descriptor
00:27:06.274 [2024-12-12 10:40:40.110444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.274 [2024-12-12 10:40:40.110485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.274 qpair failed and we were unable to recover it.
00:27:06.274 [2024-12-12 10:40:40.110726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.274 [2024-12-12 10:40:40.110761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.274 qpair failed and we were unable to recover it.
00:27:06.274 [2024-12-12 10:40:40.110932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.274 [2024-12-12 10:40:40.110965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.274 qpair failed and we were unable to recover it.
00:27:06.274 [2024-12-12 10:40:40.111146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.274 [2024-12-12 10:40:40.111179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.274 qpair failed and we were unable to recover it.
00:27:06.274 [2024-12-12 10:40:40.111439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.274 [2024-12-12 10:40:40.111472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.274 qpair failed and we were unable to recover it.
00:27:06.274 [2024-12-12 10:40:40.111662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.274 [2024-12-12 10:40:40.111696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.274 qpair failed and we were unable to recover it.
00:27:06.274 [2024-12-12 10:40:40.111933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.274 [2024-12-12 10:40:40.111965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.274 qpair failed and we were unable to recover it.
00:27:06.274 [2024-12-12 10:40:40.112088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.274 [2024-12-12 10:40:40.112127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.274 qpair failed and we were unable to recover it.
00:27:06.274 [2024-12-12 10:40:40.112250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.274 [2024-12-12 10:40:40.112284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.274 qpair failed and we were unable to recover it.
00:27:06.274 [2024-12-12 10:40:40.112528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.274 [2024-12-12 10:40:40.112560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.274 qpair failed and we were unable to recover it.
00:27:06.274 [2024-12-12 10:40:40.112834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.274 [2024-12-12 10:40:40.112867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.274 qpair failed and we were unable to recover it.
00:27:06.274 [2024-12-12 10:40:40.113043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.274 [2024-12-12 10:40:40.113076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.274 qpair failed and we were unable to recover it.
00:27:06.274 [2024-12-12 10:40:40.113260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.274 [2024-12-12 10:40:40.113293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.274 qpair failed and we were unable to recover it.
00:27:06.274 [2024-12-12 10:40:40.113425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.274 [2024-12-12 10:40:40.113458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.274 qpair failed and we were unable to recover it.
00:27:06.274 [2024-12-12 10:40:40.113683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.274 [2024-12-12 10:40:40.113718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.274 qpair failed and we were unable to recover it.
00:27:06.274 [2024-12-12 10:40:40.113914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.274 [2024-12-12 10:40:40.113946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.274 qpair failed and we were unable to recover it.
00:27:06.274 [2024-12-12 10:40:40.114130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.274 [2024-12-12 10:40:40.114162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.274 qpair failed and we were unable to recover it.
00:27:06.274 [2024-12-12 10:40:40.114277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.274 [2024-12-12 10:40:40.114309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.274 qpair failed and we were unable to recover it.
00:27:06.274 [2024-12-12 10:40:40.114479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.274 [2024-12-12 10:40:40.114512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.274 qpair failed and we were unable to recover it.
00:27:06.274 [2024-12-12 10:40:40.114660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.274 [2024-12-12 10:40:40.114694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.274 qpair failed and we were unable to recover it.
00:27:06.274 [2024-12-12 10:40:40.114888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.274 [2024-12-12 10:40:40.114920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.274 qpair failed and we were unable to recover it.
00:27:06.274 [2024-12-12 10:40:40.115122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.274 [2024-12-12 10:40:40.115155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.274 qpair failed and we were unable to recover it.
00:27:06.274 [2024-12-12 10:40:40.115326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.274 [2024-12-12 10:40:40.115359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.274 qpair failed and we were unable to recover it.
00:27:06.274 [2024-12-12 10:40:40.115531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.274 [2024-12-12 10:40:40.115564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.274 qpair failed and we were unable to recover it.
00:27:06.274 [2024-12-12 10:40:40.115755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.274 [2024-12-12 10:40:40.115787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.274 qpair failed and we were unable to recover it.
00:27:06.274 [2024-12-12 10:40:40.115994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.274 [2024-12-12 10:40:40.116028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.274 qpair failed and we were unable to recover it.
00:27:06.274 [2024-12-12 10:40:40.116236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.274 [2024-12-12 10:40:40.116270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.274 qpair failed and we were unable to recover it.
00:27:06.274 [2024-12-12 10:40:40.116510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.274 [2024-12-12 10:40:40.116543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.274 qpair failed and we were unable to recover it.
00:27:06.274 [2024-12-12 10:40:40.116682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.274 [2024-12-12 10:40:40.116715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.274 qpair failed and we were unable to recover it.
00:27:06.274 [2024-12-12 10:40:40.116840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.274 [2024-12-12 10:40:40.116872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.275 qpair failed and we were unable to recover it.
00:27:06.275 [2024-12-12 10:40:40.117064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.275 [2024-12-12 10:40:40.117097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.275 qpair failed and we were unable to recover it.
00:27:06.275 [2024-12-12 10:40:40.117276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.275 [2024-12-12 10:40:40.117309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.275 qpair failed and we were unable to recover it.
00:27:06.275 [2024-12-12 10:40:40.117505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.275 [2024-12-12 10:40:40.117538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.275 qpair failed and we were unable to recover it.
00:27:06.275 [2024-12-12 10:40:40.117745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.275 [2024-12-12 10:40:40.117785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.275 qpair failed and we were unable to recover it.
00:27:06.275 [2024-12-12 10:40:40.117982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.275 [2024-12-12 10:40:40.118016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.275 qpair failed and we were unable to recover it.
00:27:06.275 [2024-12-12 10:40:40.118276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.275 [2024-12-12 10:40:40.118310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.275 qpair failed and we were unable to recover it.
00:27:06.275 [2024-12-12 10:40:40.118582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.275 [2024-12-12 10:40:40.118616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.275 qpair failed and we were unable to recover it.
00:27:06.275 [2024-12-12 10:40:40.118790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.275 [2024-12-12 10:40:40.118823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.275 qpair failed and we were unable to recover it.
00:27:06.275 [2024-12-12 10:40:40.119038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.275 [2024-12-12 10:40:40.119071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.275 qpair failed and we were unable to recover it.
00:27:06.275 [2024-12-12 10:40:40.119262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.275 [2024-12-12 10:40:40.119294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.275 qpair failed and we were unable to recover it.
00:27:06.275 [2024-12-12 10:40:40.119557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.275 [2024-12-12 10:40:40.119602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.275 qpair failed and we were unable to recover it.
00:27:06.275 [2024-12-12 10:40:40.119847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.275 [2024-12-12 10:40:40.119880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.275 qpair failed and we were unable to recover it.
00:27:06.275 [2024-12-12 10:40:40.120138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.275 [2024-12-12 10:40:40.120171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.275 qpair failed and we were unable to recover it.
00:27:06.275 [2024-12-12 10:40:40.120306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.275 [2024-12-12 10:40:40.120339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.275 qpair failed and we were unable to recover it.
00:27:06.275 [2024-12-12 10:40:40.120609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.275 [2024-12-12 10:40:40.120643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.275 qpair failed and we were unable to recover it.
00:27:06.275 [2024-12-12 10:40:40.120823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.275 [2024-12-12 10:40:40.120856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.275 qpair failed and we were unable to recover it.
00:27:06.275 [2024-12-12 10:40:40.121123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.275 [2024-12-12 10:40:40.121156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.275 qpair failed and we were unable to recover it.
00:27:06.275 [2024-12-12 10:40:40.121336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.275 [2024-12-12 10:40:40.121375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.275 qpair failed and we were unable to recover it.
00:27:06.275 [2024-12-12 10:40:40.121654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.275 [2024-12-12 10:40:40.121688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.275 qpair failed and we were unable to recover it.
00:27:06.275 [2024-12-12 10:40:40.121877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.275 [2024-12-12 10:40:40.121911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.275 qpair failed and we were unable to recover it.
00:27:06.275 [2024-12-12 10:40:40.122095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.275 [2024-12-12 10:40:40.122129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.275 qpair failed and we were unable to recover it.
00:27:06.275 [2024-12-12 10:40:40.122392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.275 [2024-12-12 10:40:40.122425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.275 qpair failed and we were unable to recover it.
00:27:06.275 [2024-12-12 10:40:40.122699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.275 [2024-12-12 10:40:40.122734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.275 qpair failed and we were unable to recover it.
00:27:06.275 [2024-12-12 10:40:40.122904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.275 [2024-12-12 10:40:40.122938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.275 qpair failed and we were unable to recover it.
00:27:06.275 [2024-12-12 10:40:40.123053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.275 [2024-12-12 10:40:40.123086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.275 qpair failed and we were unable to recover it.
00:27:06.275 [2024-12-12 10:40:40.123337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.275 [2024-12-12 10:40:40.123371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.275 qpair failed and we were unable to recover it.
00:27:06.275 [2024-12-12 10:40:40.123608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.275 [2024-12-12 10:40:40.123642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.275 qpair failed and we were unable to recover it.
00:27:06.275 [2024-12-12 10:40:40.123830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.275 [2024-12-12 10:40:40.123863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.275 qpair failed and we were unable to recover it.
00:27:06.275 [2024-12-12 10:40:40.124033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.275 [2024-12-12 10:40:40.124067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.275 qpair failed and we were unable to recover it.
00:27:06.275 [2024-12-12 10:40:40.124320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.275 [2024-12-12 10:40:40.124353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.275 qpair failed and we were unable to recover it.
00:27:06.275 [2024-12-12 10:40:40.124540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.275 [2024-12-12 10:40:40.124582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.275 qpair failed and we were unable to recover it.
00:27:06.275 [2024-12-12 10:40:40.124724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.275 [2024-12-12 10:40:40.124758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.275 qpair failed and we were unable to recover it.
00:27:06.275 [2024-12-12 10:40:40.124995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.275 [2024-12-12 10:40:40.125028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.275 qpair failed and we were unable to recover it.
00:27:06.275 [2024-12-12 10:40:40.125151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.275 [2024-12-12 10:40:40.125184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.275 qpair failed and we were unable to recover it.
00:27:06.275 [2024-12-12 10:40:40.125363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.275 [2024-12-12 10:40:40.125396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.275 qpair failed and we were unable to recover it.
00:27:06.275 [2024-12-12 10:40:40.125521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.276 [2024-12-12 10:40:40.125554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.276 qpair failed and we were unable to recover it.
00:27:06.276 [2024-12-12 10:40:40.125682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.276 [2024-12-12 10:40:40.125716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.276 qpair failed and we were unable to recover it.
00:27:06.276 [2024-12-12 10:40:40.125833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.276 [2024-12-12 10:40:40.125866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.276 qpair failed and we were unable to recover it.
00:27:06.276 [2024-12-12 10:40:40.126040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.276 [2024-12-12 10:40:40.126073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.276 qpair failed and we were unable to recover it.
00:27:06.276 [2024-12-12 10:40:40.126240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.276 [2024-12-12 10:40:40.126273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.276 qpair failed and we were unable to recover it.
00:27:06.276 [2024-12-12 10:40:40.126512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.276 [2024-12-12 10:40:40.126545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.276 qpair failed and we were unable to recover it.
00:27:06.276 [2024-12-12 10:40:40.126761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.276 [2024-12-12 10:40:40.126796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.276 qpair failed and we were unable to recover it.
00:27:06.276 [2024-12-12 10:40:40.126980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.276 [2024-12-12 10:40:40.127013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.276 qpair failed and we were unable to recover it.
00:27:06.276 [2024-12-12 10:40:40.127249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.276 [2024-12-12 10:40:40.127283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.276 qpair failed and we were unable to recover it.
00:27:06.276 [2024-12-12 10:40:40.127406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.276 [2024-12-12 10:40:40.127440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.276 qpair failed and we were unable to recover it.
00:27:06.276 [2024-12-12 10:40:40.127614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.276 [2024-12-12 10:40:40.127649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.276 qpair failed and we were unable to recover it.
00:27:06.276 [2024-12-12 10:40:40.127776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.276 [2024-12-12 10:40:40.127809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.276 qpair failed and we were unable to recover it.
00:27:06.276 [2024-12-12 10:40:40.127993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.276 [2024-12-12 10:40:40.128026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.276 qpair failed and we were unable to recover it.
00:27:06.276 [2024-12-12 10:40:40.128269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.276 [2024-12-12 10:40:40.128302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.276 qpair failed and we were unable to recover it.
00:27:06.276 [2024-12-12 10:40:40.128427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.276 [2024-12-12 10:40:40.128461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.276 qpair failed and we were unable to recover it.
00:27:06.276 [2024-12-12 10:40:40.128641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.276 [2024-12-12 10:40:40.128675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.276 qpair failed and we were unable to recover it.
00:27:06.276 [2024-12-12 10:40:40.128810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.276 [2024-12-12 10:40:40.128843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.276 qpair failed and we were unable to recover it.
00:27:06.276 [2024-12-12 10:40:40.129081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.276 [2024-12-12 10:40:40.129114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.276 qpair failed and we were unable to recover it.
00:27:06.276 [2024-12-12 10:40:40.129376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.276 [2024-12-12 10:40:40.129409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.276 qpair failed and we were unable to recover it.
00:27:06.276 [2024-12-12 10:40:40.129586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.276 [2024-12-12 10:40:40.129621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.276 qpair failed and we were unable to recover it.
00:27:06.276 [2024-12-12 10:40:40.129793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.276 [2024-12-12 10:40:40.129827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.276 qpair failed and we were unable to recover it.
00:27:06.276 [2024-12-12 10:40:40.130071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.276 [2024-12-12 10:40:40.130124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.276 qpair failed and we were unable to recover it.
00:27:06.276 [2024-12-12 10:40:40.130365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.276 [2024-12-12 10:40:40.130404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.276 qpair failed and we were unable to recover it.
00:27:06.276 [2024-12-12 10:40:40.130583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.276 [2024-12-12 10:40:40.130618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.276 qpair failed and we were unable to recover it.
00:27:06.276 [2024-12-12 10:40:40.130746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.276 [2024-12-12 10:40:40.130778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.276 qpair failed and we were unable to recover it.
00:27:06.276 [2024-12-12 10:40:40.131013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.276 [2024-12-12 10:40:40.131045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.276 qpair failed and we were unable to recover it.
00:27:06.276 [2024-12-12 10:40:40.131167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.276 [2024-12-12 10:40:40.131200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.276 qpair failed and we were unable to recover it.
00:27:06.276 [2024-12-12 10:40:40.131386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.276 [2024-12-12 10:40:40.131419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.276 qpair failed and we were unable to recover it.
00:27:06.276 [2024-12-12 10:40:40.131604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.276 [2024-12-12 10:40:40.131638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.276 qpair failed and we were unable to recover it.
00:27:06.276 [2024-12-12 10:40:40.131823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.276 [2024-12-12 10:40:40.131855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.276 qpair failed and we were unable to recover it.
00:27:06.276 [2024-12-12 10:40:40.132026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.277 [2024-12-12 10:40:40.132059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.277 qpair failed and we were unable to recover it.
00:27:06.277 [2024-12-12 10:40:40.132233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.277 [2024-12-12 10:40:40.132265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.277 qpair failed and we were unable to recover it.
00:27:06.277 [2024-12-12 10:40:40.132475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.277 [2024-12-12 10:40:40.132507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.277 qpair failed and we were unable to recover it.
00:27:06.277 [2024-12-12 10:40:40.132695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.277 [2024-12-12 10:40:40.132730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.277 qpair failed and we were unable to recover it.
00:27:06.277 [2024-12-12 10:40:40.132935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.277 [2024-12-12 10:40:40.132968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.277 qpair failed and we were unable to recover it.
00:27:06.277 [2024-12-12 10:40:40.133252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.277 [2024-12-12 10:40:40.133284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.277 qpair failed and we were unable to recover it.
00:27:06.277 [2024-12-12 10:40:40.133409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.277 [2024-12-12 10:40:40.133443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.277 qpair failed and we were unable to recover it.
00:27:06.277 [2024-12-12 10:40:40.133625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.277 [2024-12-12 10:40:40.133659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.277 qpair failed and we were unable to recover it.
00:27:06.277 [2024-12-12 10:40:40.133843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.277 [2024-12-12 10:40:40.133875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.277 qpair failed and we were unable to recover it.
00:27:06.277 [2024-12-12 10:40:40.134139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.277 [2024-12-12 10:40:40.134172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.277 qpair failed and we were unable to recover it.
00:27:06.277 [2024-12-12 10:40:40.134407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.277 [2024-12-12 10:40:40.134439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.277 qpair failed and we were unable to recover it. 00:27:06.277 [2024-12-12 10:40:40.134721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.277 [2024-12-12 10:40:40.134755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.277 qpair failed and we were unable to recover it. 00:27:06.277 [2024-12-12 10:40:40.134943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.277 [2024-12-12 10:40:40.134975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.277 qpair failed and we were unable to recover it. 00:27:06.277 [2024-12-12 10:40:40.135112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.277 [2024-12-12 10:40:40.135145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.277 qpair failed and we were unable to recover it. 00:27:06.277 [2024-12-12 10:40:40.135373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.277 [2024-12-12 10:40:40.135405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.277 qpair failed and we were unable to recover it. 00:27:06.277 [2024-12-12 10:40:40.135587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.277 [2024-12-12 10:40:40.135621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.277 qpair failed and we were unable to recover it. 00:27:06.277 [2024-12-12 10:40:40.135884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.277 [2024-12-12 10:40:40.135916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.277 qpair failed and we were unable to recover it. 00:27:06.277 [2024-12-12 10:40:40.136056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.277 [2024-12-12 10:40:40.136089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.277 qpair failed and we were unable to recover it. 00:27:06.277 [2024-12-12 10:40:40.136219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.277 [2024-12-12 10:40:40.136251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.277 qpair failed and we were unable to recover it. 00:27:06.277 [2024-12-12 10:40:40.136369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.277 [2024-12-12 10:40:40.136403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.277 qpair failed and we were unable to recover it. 
00:27:06.277 [2024-12-12 10:40:40.136615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.277 [2024-12-12 10:40:40.136650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.277 qpair failed and we were unable to recover it. 00:27:06.277 [2024-12-12 10:40:40.136769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.277 [2024-12-12 10:40:40.136802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.277 qpair failed and we were unable to recover it. 00:27:06.277 [2024-12-12 10:40:40.136952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.277 [2024-12-12 10:40:40.136985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.277 qpair failed and we were unable to recover it. 00:27:06.277 [2024-12-12 10:40:40.137107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.277 [2024-12-12 10:40:40.137140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.277 qpair failed and we were unable to recover it. 00:27:06.277 [2024-12-12 10:40:40.137260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.277 [2024-12-12 10:40:40.137293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.277 qpair failed and we were unable to recover it. 00:27:06.277 [2024-12-12 10:40:40.137532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.277 [2024-12-12 10:40:40.137566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.277 qpair failed and we were unable to recover it. 00:27:06.277 [2024-12-12 10:40:40.137685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.277 [2024-12-12 10:40:40.137725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.277 qpair failed and we were unable to recover it. 00:27:06.277 [2024-12-12 10:40:40.137900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.277 [2024-12-12 10:40:40.137933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.277 qpair failed and we were unable to recover it. 00:27:06.277 [2024-12-12 10:40:40.138116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.277 [2024-12-12 10:40:40.138149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.277 qpair failed and we were unable to recover it. 00:27:06.277 [2024-12-12 10:40:40.138324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.277 [2024-12-12 10:40:40.138357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.277 qpair failed and we were unable to recover it. 
00:27:06.277 [2024-12-12 10:40:40.138469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.277 [2024-12-12 10:40:40.138503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.277 qpair failed and we were unable to recover it. 00:27:06.277 [2024-12-12 10:40:40.138693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.277 [2024-12-12 10:40:40.138727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.277 qpair failed and we were unable to recover it. 00:27:06.277 [2024-12-12 10:40:40.138911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.277 [2024-12-12 10:40:40.138950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.277 qpair failed and we were unable to recover it. 00:27:06.277 [2024-12-12 10:40:40.139077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.277 [2024-12-12 10:40:40.139110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.277 qpair failed and we were unable to recover it. 00:27:06.277 [2024-12-12 10:40:40.139292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.277 [2024-12-12 10:40:40.139325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.277 qpair failed and we were unable to recover it. 00:27:06.277 [2024-12-12 10:40:40.139582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.277 [2024-12-12 10:40:40.139616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.277 qpair failed and we were unable to recover it. 00:27:06.278 [2024-12-12 10:40:40.139811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.278 [2024-12-12 10:40:40.139844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.278 qpair failed and we were unable to recover it. 00:27:06.278 [2024-12-12 10:40:40.139978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.278 [2024-12-12 10:40:40.140010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.278 qpair failed and we were unable to recover it. 00:27:06.278 [2024-12-12 10:40:40.140215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.278 [2024-12-12 10:40:40.140247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.278 qpair failed and we were unable to recover it. 00:27:06.278 [2024-12-12 10:40:40.140484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.278 [2024-12-12 10:40:40.140517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.278 qpair failed and we were unable to recover it. 
00:27:06.278 [2024-12-12 10:40:40.140736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.278 [2024-12-12 10:40:40.140770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.278 qpair failed and we were unable to recover it. 00:27:06.278 [2024-12-12 10:40:40.140880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.278 [2024-12-12 10:40:40.140913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.278 qpair failed and we were unable to recover it. 00:27:06.278 [2024-12-12 10:40:40.141096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.278 [2024-12-12 10:40:40.141129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.278 qpair failed and we were unable to recover it. 00:27:06.278 [2024-12-12 10:40:40.141339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.278 [2024-12-12 10:40:40.141371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.278 qpair failed and we were unable to recover it. 00:27:06.278 [2024-12-12 10:40:40.141635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.278 [2024-12-12 10:40:40.141670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.278 qpair failed and we were unable to recover it. 00:27:06.278 [2024-12-12 10:40:40.141866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.278 [2024-12-12 10:40:40.141898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.278 qpair failed and we were unable to recover it. 00:27:06.278 [2024-12-12 10:40:40.142166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.278 [2024-12-12 10:40:40.142199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.278 qpair failed and we were unable to recover it. 00:27:06.278 [2024-12-12 10:40:40.142460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.278 [2024-12-12 10:40:40.142494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.278 qpair failed and we were unable to recover it. 00:27:06.278 [2024-12-12 10:40:40.142675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.278 [2024-12-12 10:40:40.142710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.278 qpair failed and we were unable to recover it. 00:27:06.278 [2024-12-12 10:40:40.142952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.278 [2024-12-12 10:40:40.142985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.278 qpair failed and we were unable to recover it. 
00:27:06.278 [2024-12-12 10:40:40.143189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.278 [2024-12-12 10:40:40.143222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.278 qpair failed and we were unable to recover it. 00:27:06.278 [2024-12-12 10:40:40.143419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.278 [2024-12-12 10:40:40.143453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.278 qpair failed and we were unable to recover it. 00:27:06.278 [2024-12-12 10:40:40.143626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.278 [2024-12-12 10:40:40.143661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.278 qpair failed and we were unable to recover it. 00:27:06.278 [2024-12-12 10:40:40.143784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.278 [2024-12-12 10:40:40.143816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.278 qpair failed and we were unable to recover it. 00:27:06.278 [2024-12-12 10:40:40.144083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.278 [2024-12-12 10:40:40.144116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.278 qpair failed and we were unable to recover it. 00:27:06.278 [2024-12-12 10:40:40.144289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.278 [2024-12-12 10:40:40.144322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.278 qpair failed and we were unable to recover it. 00:27:06.278 [2024-12-12 10:40:40.144428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.278 [2024-12-12 10:40:40.144461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.278 qpair failed and we were unable to recover it. 00:27:06.278 [2024-12-12 10:40:40.144656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.278 [2024-12-12 10:40:40.144691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.278 qpair failed and we were unable to recover it. 00:27:06.278 [2024-12-12 10:40:40.144993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.278 [2024-12-12 10:40:40.145025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.278 qpair failed and we were unable to recover it. 00:27:06.278 [2024-12-12 10:40:40.145272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.278 [2024-12-12 10:40:40.145306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.278 qpair failed and we were unable to recover it. 
00:27:06.278 [2024-12-12 10:40:40.145567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.278 [2024-12-12 10:40:40.145613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.278 qpair failed and we were unable to recover it. 00:27:06.278 [2024-12-12 10:40:40.145741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.278 [2024-12-12 10:40:40.145773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.278 qpair failed and we were unable to recover it. 00:27:06.278 [2024-12-12 10:40:40.145960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.278 [2024-12-12 10:40:40.145993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.278 qpair failed and we were unable to recover it. 00:27:06.278 [2024-12-12 10:40:40.146115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.278 [2024-12-12 10:40:40.146148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.278 qpair failed and we were unable to recover it. 00:27:06.278 [2024-12-12 10:40:40.146327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.278 [2024-12-12 10:40:40.146360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.278 qpair failed and we were unable to recover it. 00:27:06.278 [2024-12-12 10:40:40.146480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.278 [2024-12-12 10:40:40.146513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.278 qpair failed and we were unable to recover it. 00:27:06.278 [2024-12-12 10:40:40.146759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.278 [2024-12-12 10:40:40.146793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.278 qpair failed and we were unable to recover it. 00:27:06.278 [2024-12-12 10:40:40.147010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.278 [2024-12-12 10:40:40.147042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.278 qpair failed and we were unable to recover it. 00:27:06.278 [2024-12-12 10:40:40.147216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.278 [2024-12-12 10:40:40.147249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.278 qpair failed and we were unable to recover it. 00:27:06.278 [2024-12-12 10:40:40.147361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.278 [2024-12-12 10:40:40.147394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.278 qpair failed and we were unable to recover it. 
00:27:06.278 [2024-12-12 10:40:40.147654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.278 [2024-12-12 10:40:40.147688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.278 qpair failed and we were unable to recover it. 00:27:06.278 [2024-12-12 10:40:40.147814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.278 [2024-12-12 10:40:40.147847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.279 qpair failed and we were unable to recover it. 00:27:06.279 [2024-12-12 10:40:40.148020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.279 [2024-12-12 10:40:40.148058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.279 qpair failed and we were unable to recover it. 00:27:06.279 [2024-12-12 10:40:40.148247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.279 [2024-12-12 10:40:40.148281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.279 qpair failed and we were unable to recover it. 00:27:06.279 [2024-12-12 10:40:40.148390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.279 [2024-12-12 10:40:40.148423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.279 qpair failed and we were unable to recover it. 00:27:06.279 [2024-12-12 10:40:40.148529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.279 [2024-12-12 10:40:40.148562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.279 qpair failed and we were unable to recover it. 00:27:06.279 [2024-12-12 10:40:40.148751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.279 [2024-12-12 10:40:40.148785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.279 qpair failed and we were unable to recover it. 00:27:06.279 [2024-12-12 10:40:40.148887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.279 [2024-12-12 10:40:40.148920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.279 qpair failed and we were unable to recover it. 00:27:06.279 [2024-12-12 10:40:40.149090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.279 [2024-12-12 10:40:40.149123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.279 qpair failed and we were unable to recover it. 00:27:06.279 [2024-12-12 10:40:40.149290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.279 [2024-12-12 10:40:40.149324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.279 qpair failed and we were unable to recover it. 
00:27:06.279 [2024-12-12 10:40:40.149593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.279 [2024-12-12 10:40:40.149628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.279 qpair failed and we were unable to recover it. 00:27:06.279 [2024-12-12 10:40:40.149822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.279 [2024-12-12 10:40:40.149855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.279 qpair failed and we were unable to recover it. 00:27:06.279 [2024-12-12 10:40:40.149961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.279 [2024-12-12 10:40:40.149994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.279 qpair failed and we were unable to recover it. 00:27:06.279 [2024-12-12 10:40:40.150181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.279 [2024-12-12 10:40:40.150214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.279 qpair failed and we were unable to recover it. 00:27:06.279 [2024-12-12 10:40:40.150396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.279 [2024-12-12 10:40:40.150428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.279 qpair failed and we were unable to recover it. 00:27:06.279 [2024-12-12 10:40:40.150633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.279 [2024-12-12 10:40:40.150667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.279 qpair failed and we were unable to recover it. 00:27:06.279 [2024-12-12 10:40:40.150882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.279 [2024-12-12 10:40:40.150916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.279 qpair failed and we were unable to recover it. 00:27:06.279 [2024-12-12 10:40:40.151032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.279 [2024-12-12 10:40:40.151065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.279 qpair failed and we were unable to recover it. 00:27:06.279 [2024-12-12 10:40:40.151236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.279 [2024-12-12 10:40:40.151269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.279 qpair failed and we were unable to recover it. 00:27:06.279 [2024-12-12 10:40:40.151375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.279 [2024-12-12 10:40:40.151408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.279 qpair failed and we were unable to recover it. 
00:27:06.279 [2024-12-12 10:40:40.151528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.279 [2024-12-12 10:40:40.151561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.279 qpair failed and we were unable to recover it. 00:27:06.279 [2024-12-12 10:40:40.151763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.279 [2024-12-12 10:40:40.151798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.279 qpair failed and we were unable to recover it. 00:27:06.279 [2024-12-12 10:40:40.151994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.279 [2024-12-12 10:40:40.152026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.279 qpair failed and we were unable to recover it. 00:27:06.279 [2024-12-12 10:40:40.152150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.279 [2024-12-12 10:40:40.152182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.279 qpair failed and we were unable to recover it. 00:27:06.279 [2024-12-12 10:40:40.152473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.279 [2024-12-12 10:40:40.152506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.279 qpair failed and we were unable to recover it. 00:27:06.279 [2024-12-12 10:40:40.152722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.279 [2024-12-12 10:40:40.152757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.279 qpair failed and we were unable to recover it. 00:27:06.279 [2024-12-12 10:40:40.152938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.279 [2024-12-12 10:40:40.152971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.279 qpair failed and we were unable to recover it. 00:27:06.279 [2024-12-12 10:40:40.153103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.279 [2024-12-12 10:40:40.153137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.279 qpair failed and we were unable to recover it. 00:27:06.279 [2024-12-12 10:40:40.153391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.279 [2024-12-12 10:40:40.153423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.279 qpair failed and we were unable to recover it. 00:27:06.279 [2024-12-12 10:40:40.153552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.279 [2024-12-12 10:40:40.153593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.279 qpair failed and we were unable to recover it. 
00:27:06.279 [2024-12-12 10:40:40.153718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.279 [2024-12-12 10:40:40.153751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.279 qpair failed and we were unable to recover it. 00:27:06.279 [2024-12-12 10:40:40.153961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.279 [2024-12-12 10:40:40.153995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.279 qpair failed and we were unable to recover it. 00:27:06.279 [2024-12-12 10:40:40.154134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.279 [2024-12-12 10:40:40.154167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.279 qpair failed and we were unable to recover it. 00:27:06.279 [2024-12-12 10:40:40.154295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.279 [2024-12-12 10:40:40.154328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.279 qpair failed and we were unable to recover it. 00:27:06.279 [2024-12-12 10:40:40.154449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.279 [2024-12-12 10:40:40.154483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.279 qpair failed and we were unable to recover it. 00:27:06.279 [2024-12-12 10:40:40.154590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.279 [2024-12-12 10:40:40.154624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.279 qpair failed and we were unable to recover it. 00:27:06.279 [2024-12-12 10:40:40.154738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.279 [2024-12-12 10:40:40.154771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.279 qpair failed and we were unable to recover it. 00:27:06.279 [2024-12-12 10:40:40.154977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.279 [2024-12-12 10:40:40.155011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.279 qpair failed and we were unable to recover it. 00:27:06.279 [2024-12-12 10:40:40.155121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.280 [2024-12-12 10:40:40.155154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.280 qpair failed and we were unable to recover it. 00:27:06.280 [2024-12-12 10:40:40.155332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.280 [2024-12-12 10:40:40.155365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.280 qpair failed and we were unable to recover it. 
00:27:06.280 [2024-12-12 10:40:40.155548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.280 [2024-12-12 10:40:40.155588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.280 qpair failed and we were unable to recover it. 00:27:06.280 [2024-12-12 10:40:40.155805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.280 [2024-12-12 10:40:40.155838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.280 qpair failed and we were unable to recover it. 00:27:06.280 [2024-12-12 10:40:40.156026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.280 [2024-12-12 10:40:40.156065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.280 qpair failed and we were unable to recover it. 00:27:06.280 [2024-12-12 10:40:40.156256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.280 [2024-12-12 10:40:40.156289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.280 qpair failed and we were unable to recover it. 00:27:06.280 [2024-12-12 10:40:40.156528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.280 [2024-12-12 10:40:40.156562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.280 qpair failed and we were unable to recover it. 00:27:06.280 [2024-12-12 10:40:40.156749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.280 [2024-12-12 10:40:40.156782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.280 qpair failed and we were unable to recover it. 00:27:06.280 [2024-12-12 10:40:40.156962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.280 [2024-12-12 10:40:40.156996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.280 qpair failed and we were unable to recover it. 00:27:06.280 [2024-12-12 10:40:40.157106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.280 [2024-12-12 10:40:40.157140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.280 qpair failed and we were unable to recover it. 00:27:06.280 [2024-12-12 10:40:40.157316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.280 [2024-12-12 10:40:40.157350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.280 qpair failed and we were unable to recover it. 00:27:06.280 [2024-12-12 10:40:40.157538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.280 [2024-12-12 10:40:40.157597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.280 qpair failed and we were unable to recover it. 
00:27:06.280 [2024-12-12 10:40:40.157777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.280 [2024-12-12 10:40:40.157811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.280 qpair failed and we were unable to recover it. 00:27:06.280 [2024-12-12 10:40:40.158001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.280 [2024-12-12 10:40:40.158034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.280 qpair failed and we were unable to recover it. 00:27:06.280 [2024-12-12 10:40:40.158280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.280 [2024-12-12 10:40:40.158312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.280 qpair failed and we were unable to recover it. 00:27:06.280 [2024-12-12 10:40:40.158483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.280 [2024-12-12 10:40:40.158516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.280 qpair failed and we were unable to recover it. 00:27:06.280 [2024-12-12 10:40:40.158656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.280 [2024-12-12 10:40:40.158692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.280 qpair failed and we were unable to recover it. 00:27:06.280 [2024-12-12 10:40:40.158818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.280 [2024-12-12 10:40:40.158851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.280 qpair failed and we were unable to recover it. 00:27:06.280 [2024-12-12 10:40:40.158975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.280 [2024-12-12 10:40:40.159009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.280 qpair failed and we were unable to recover it. 00:27:06.280 [2024-12-12 10:40:40.159216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.280 [2024-12-12 10:40:40.159250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.280 qpair failed and we were unable to recover it. 00:27:06.280 [2024-12-12 10:40:40.159354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.280 [2024-12-12 10:40:40.159387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.280 qpair failed and we were unable to recover it. 00:27:06.280 [2024-12-12 10:40:40.159588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.280 [2024-12-12 10:40:40.159623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.280 qpair failed and we were unable to recover it. 
00:27:06.280 [2024-12-12 10:40:40.159827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.280 [2024-12-12 10:40:40.159860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.280 qpair failed and we were unable to recover it. 00:27:06.280 [2024-12-12 10:40:40.159976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.280 [2024-12-12 10:40:40.160008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.280 qpair failed and we were unable to recover it. 00:27:06.280 [2024-12-12 10:40:40.160130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.280 [2024-12-12 10:40:40.160163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.280 qpair failed and we were unable to recover it. 00:27:06.280 [2024-12-12 10:40:40.160381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.280 [2024-12-12 10:40:40.160415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.280 qpair failed and we were unable to recover it. 00:27:06.280 [2024-12-12 10:40:40.160540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.280 [2024-12-12 10:40:40.160581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.280 qpair failed and we were unable to recover it. 00:27:06.280 [2024-12-12 10:40:40.160771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.280 [2024-12-12 10:40:40.160804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.280 qpair failed and we were unable to recover it. 00:27:06.280 [2024-12-12 10:40:40.161045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.280 [2024-12-12 10:40:40.161077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.280 qpair failed and we were unable to recover it. 00:27:06.280 [2024-12-12 10:40:40.161249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.280 [2024-12-12 10:40:40.161283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.280 qpair failed and we were unable to recover it. 00:27:06.280 [2024-12-12 10:40:40.161525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.280 [2024-12-12 10:40:40.161558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.280 qpair failed and we were unable to recover it. 00:27:06.280 [2024-12-12 10:40:40.161698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.280 [2024-12-12 10:40:40.161733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.280 qpair failed and we were unable to recover it. 
00:27:06.280 [2024-12-12 10:40:40.161915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.280 [2024-12-12 10:40:40.161947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.280 qpair failed and we were unable to recover it. 00:27:06.280 [2024-12-12 10:40:40.162129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.280 [2024-12-12 10:40:40.162163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.280 qpair failed and we were unable to recover it. 00:27:06.280 [2024-12-12 10:40:40.162403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.280 [2024-12-12 10:40:40.162436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.280 qpair failed and we were unable to recover it. 00:27:06.280 [2024-12-12 10:40:40.162653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.281 [2024-12-12 10:40:40.162688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.281 qpair failed and we were unable to recover it. 00:27:06.281 [2024-12-12 10:40:40.162810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.281 [2024-12-12 10:40:40.162843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.281 qpair failed and we were unable to recover it. 00:27:06.281 [2024-12-12 10:40:40.163024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.281 [2024-12-12 10:40:40.163057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.281 qpair failed and we were unable to recover it. 00:27:06.281 [2024-12-12 10:40:40.163182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.281 [2024-12-12 10:40:40.163215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.281 qpair failed and we were unable to recover it. 00:27:06.281 [2024-12-12 10:40:40.163345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.281 [2024-12-12 10:40:40.163378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.281 qpair failed and we were unable to recover it. 00:27:06.281 [2024-12-12 10:40:40.163641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.281 [2024-12-12 10:40:40.163676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.281 qpair failed and we were unable to recover it. 00:27:06.281 [2024-12-12 10:40:40.163801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.281 [2024-12-12 10:40:40.163833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.281 qpair failed and we were unable to recover it. 
00:27:06.281 [2024-12-12 10:40:40.163961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.281 [2024-12-12 10:40:40.163994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.281 qpair failed and we were unable to recover it. 00:27:06.281 [2024-12-12 10:40:40.164231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.281 [2024-12-12 10:40:40.164263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.281 qpair failed and we were unable to recover it. 00:27:06.281 [2024-12-12 10:40:40.164392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.281 [2024-12-12 10:40:40.164431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.281 qpair failed and we were unable to recover it. 00:27:06.281 [2024-12-12 10:40:40.164554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.281 [2024-12-12 10:40:40.164594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.281 qpair failed and we were unable to recover it. 00:27:06.281 [2024-12-12 10:40:40.164710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.281 [2024-12-12 10:40:40.164743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.281 qpair failed and we were unable to recover it. 00:27:06.281 [2024-12-12 10:40:40.164981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.281 [2024-12-12 10:40:40.165014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.281 qpair failed and we were unable to recover it. 00:27:06.281 [2024-12-12 10:40:40.165140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.281 [2024-12-12 10:40:40.165173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.281 qpair failed and we were unable to recover it. 00:27:06.281 [2024-12-12 10:40:40.165388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.281 [2024-12-12 10:40:40.165421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.281 qpair failed and we were unable to recover it. 00:27:06.281 [2024-12-12 10:40:40.165639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.281 [2024-12-12 10:40:40.165673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.281 qpair failed and we were unable to recover it. 00:27:06.281 [2024-12-12 10:40:40.165922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.281 [2024-12-12 10:40:40.165955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.281 qpair failed and we were unable to recover it. 
00:27:06.281 [2024-12-12 10:40:40.166159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.281 [2024-12-12 10:40:40.166192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.281 qpair failed and we were unable to recover it. 00:27:06.281 [2024-12-12 10:40:40.166445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.281 [2024-12-12 10:40:40.166478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.281 qpair failed and we were unable to recover it. 00:27:06.281 [2024-12-12 10:40:40.166649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.281 [2024-12-12 10:40:40.166684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.281 qpair failed and we were unable to recover it. 00:27:06.281 [2024-12-12 10:40:40.166878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.281 [2024-12-12 10:40:40.166912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.281 qpair failed and we were unable to recover it. 00:27:06.281 [2024-12-12 10:40:40.167107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.281 [2024-12-12 10:40:40.167141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.281 qpair failed and we were unable to recover it. 00:27:06.281 [2024-12-12 10:40:40.167351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.281 [2024-12-12 10:40:40.167385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.281 qpair failed and we were unable to recover it. 00:27:06.281 [2024-12-12 10:40:40.167683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.281 [2024-12-12 10:40:40.167718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.281 qpair failed and we were unable to recover it. 00:27:06.281 [2024-12-12 10:40:40.167887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.281 [2024-12-12 10:40:40.167920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.281 qpair failed and we were unable to recover it. 00:27:06.281 [2024-12-12 10:40:40.168161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.281 [2024-12-12 10:40:40.168193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.281 qpair failed and we were unable to recover it. 00:27:06.281 [2024-12-12 10:40:40.168374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.281 [2024-12-12 10:40:40.168408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.281 qpair failed and we were unable to recover it. 
[... the same three-line failure — connect() failed with errno = 111 (ECONNREFUSED) in posix_sock_create, the sock connection error for tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." — repeats for every subsequent retry from 2024-12-12 10:40:40.166445 through 10:40:40.213013 ...]
00:27:06.287 [2024-12-12 10:40:40.213132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.287 [2024-12-12 10:40:40.213165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.287 qpair failed and we were unable to recover it. 00:27:06.287 [2024-12-12 10:40:40.213336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.287 [2024-12-12 10:40:40.213369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.287 qpair failed and we were unable to recover it. 00:27:06.287 [2024-12-12 10:40:40.213636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.287 [2024-12-12 10:40:40.213671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.287 qpair failed and we were unable to recover it. 00:27:06.287 [2024-12-12 10:40:40.213863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.287 [2024-12-12 10:40:40.213894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.287 qpair failed and we were unable to recover it. 00:27:06.287 [2024-12-12 10:40:40.214089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.287 [2024-12-12 10:40:40.214122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.287 qpair failed and we were unable to recover it. 00:27:06.287 [2024-12-12 10:40:40.214292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.287 [2024-12-12 10:40:40.214325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.287 qpair failed and we were unable to recover it. 00:27:06.287 [2024-12-12 10:40:40.214588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.287 [2024-12-12 10:40:40.214623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.287 qpair failed and we were unable to recover it. 00:27:06.287 [2024-12-12 10:40:40.214875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.287 [2024-12-12 10:40:40.214908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.287 qpair failed and we were unable to recover it. 00:27:06.287 [2024-12-12 10:40:40.215011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.287 [2024-12-12 10:40:40.215043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.287 qpair failed and we were unable to recover it. 00:27:06.287 [2024-12-12 10:40:40.215231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.287 [2024-12-12 10:40:40.215271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.287 qpair failed and we were unable to recover it. 
00:27:06.287 [2024-12-12 10:40:40.215482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.287 [2024-12-12 10:40:40.215515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.287 qpair failed and we were unable to recover it. 00:27:06.287 [2024-12-12 10:40:40.215647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.287 [2024-12-12 10:40:40.215682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.287 qpair failed and we were unable to recover it. 00:27:06.287 [2024-12-12 10:40:40.215921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.287 [2024-12-12 10:40:40.215954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.287 qpair failed and we were unable to recover it. 00:27:06.287 [2024-12-12 10:40:40.216086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.287 [2024-12-12 10:40:40.216118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.288 qpair failed and we were unable to recover it. 00:27:06.288 [2024-12-12 10:40:40.216389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.288 [2024-12-12 10:40:40.216421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.288 qpair failed and we were unable to recover it. 00:27:06.288 [2024-12-12 10:40:40.216593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.288 [2024-12-12 10:40:40.216627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.288 qpair failed and we were unable to recover it. 00:27:06.288 [2024-12-12 10:40:40.216842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.288 [2024-12-12 10:40:40.216873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.288 qpair failed and we were unable to recover it. 00:27:06.288 [2024-12-12 10:40:40.216985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.288 [2024-12-12 10:40:40.217018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.288 qpair failed and we were unable to recover it. 00:27:06.288 [2024-12-12 10:40:40.217191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.288 [2024-12-12 10:40:40.217224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.288 qpair failed and we were unable to recover it. 00:27:06.288 [2024-12-12 10:40:40.217458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.288 [2024-12-12 10:40:40.217492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.288 qpair failed and we were unable to recover it. 
00:27:06.288 [2024-12-12 10:40:40.217679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.288 [2024-12-12 10:40:40.217713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.288 qpair failed and we were unable to recover it. 00:27:06.288 [2024-12-12 10:40:40.217900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.288 [2024-12-12 10:40:40.217932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.288 qpair failed and we were unable to recover it. 00:27:06.288 [2024-12-12 10:40:40.218060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.288 [2024-12-12 10:40:40.218093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.288 qpair failed and we were unable to recover it. 00:27:06.288 [2024-12-12 10:40:40.218279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.288 [2024-12-12 10:40:40.218312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.288 qpair failed and we were unable to recover it. 00:27:06.288 [2024-12-12 10:40:40.218501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.288 [2024-12-12 10:40:40.218534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.288 qpair failed and we were unable to recover it. 00:27:06.288 [2024-12-12 10:40:40.218723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.288 [2024-12-12 10:40:40.218757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.288 qpair failed and we were unable to recover it. 00:27:06.288 [2024-12-12 10:40:40.218940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.288 [2024-12-12 10:40:40.218973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.288 qpair failed and we were unable to recover it. 00:27:06.288 [2024-12-12 10:40:40.219155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.288 [2024-12-12 10:40:40.219187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.288 qpair failed and we were unable to recover it. 00:27:06.288 [2024-12-12 10:40:40.219450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.288 [2024-12-12 10:40:40.219484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.288 qpair failed and we were unable to recover it. 00:27:06.288 [2024-12-12 10:40:40.219659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.288 [2024-12-12 10:40:40.219693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.288 qpair failed and we were unable to recover it. 
00:27:06.288 [2024-12-12 10:40:40.219812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.288 [2024-12-12 10:40:40.219844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.288 qpair failed and we were unable to recover it. 00:27:06.288 [2024-12-12 10:40:40.219978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.288 [2024-12-12 10:40:40.220011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.288 qpair failed and we were unable to recover it. 00:27:06.288 [2024-12-12 10:40:40.220221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.288 [2024-12-12 10:40:40.220254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.288 qpair failed and we were unable to recover it. 00:27:06.288 [2024-12-12 10:40:40.220491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.288 [2024-12-12 10:40:40.220524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.288 qpair failed and we were unable to recover it. 00:27:06.288 [2024-12-12 10:40:40.220711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.288 [2024-12-12 10:40:40.220744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.288 qpair failed and we were unable to recover it. 00:27:06.288 [2024-12-12 10:40:40.220850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.288 [2024-12-12 10:40:40.220883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.288 qpair failed and we were unable to recover it. 00:27:06.288 [2024-12-12 10:40:40.221007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.288 [2024-12-12 10:40:40.221041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.288 qpair failed and we were unable to recover it. 00:27:06.288 [2024-12-12 10:40:40.221321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.288 [2024-12-12 10:40:40.221354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.288 qpair failed and we were unable to recover it. 00:27:06.288 [2024-12-12 10:40:40.221473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.288 [2024-12-12 10:40:40.221506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.288 qpair failed and we were unable to recover it. 00:27:06.288 [2024-12-12 10:40:40.221751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.288 [2024-12-12 10:40:40.221785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.288 qpair failed and we were unable to recover it. 
00:27:06.288 [2024-12-12 10:40:40.221969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.288 [2024-12-12 10:40:40.222002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.288 qpair failed and we were unable to recover it. 00:27:06.288 [2024-12-12 10:40:40.222123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.288 [2024-12-12 10:40:40.222156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.288 qpair failed and we were unable to recover it. 00:27:06.288 [2024-12-12 10:40:40.222349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.288 [2024-12-12 10:40:40.222382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.288 qpair failed and we were unable to recover it. 00:27:06.288 [2024-12-12 10:40:40.222645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.288 [2024-12-12 10:40:40.222679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.288 qpair failed and we were unable to recover it. 00:27:06.288 [2024-12-12 10:40:40.222937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.288 [2024-12-12 10:40:40.222970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.288 qpair failed and we were unable to recover it. 00:27:06.288 [2024-12-12 10:40:40.223150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.288 [2024-12-12 10:40:40.223183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.288 qpair failed and we were unable to recover it. 00:27:06.288 [2024-12-12 10:40:40.223385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.288 [2024-12-12 10:40:40.223418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.288 qpair failed and we were unable to recover it. 00:27:06.288 [2024-12-12 10:40:40.223673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.288 [2024-12-12 10:40:40.223707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.288 qpair failed and we were unable to recover it. 00:27:06.288 [2024-12-12 10:40:40.223825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.288 [2024-12-12 10:40:40.223858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.289 qpair failed and we were unable to recover it. 00:27:06.289 [2024-12-12 10:40:40.223969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.289 [2024-12-12 10:40:40.224013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.289 qpair failed and we were unable to recover it. 
00:27:06.289 [2024-12-12 10:40:40.224207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.289 [2024-12-12 10:40:40.224240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.289 qpair failed and we were unable to recover it. 00:27:06.289 [2024-12-12 10:40:40.224410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.289 [2024-12-12 10:40:40.224443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.289 qpair failed and we were unable to recover it. 00:27:06.289 [2024-12-12 10:40:40.224620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.289 [2024-12-12 10:40:40.224654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.289 qpair failed and we were unable to recover it. 00:27:06.289 [2024-12-12 10:40:40.224842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.289 [2024-12-12 10:40:40.224874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.289 qpair failed and we were unable to recover it. 00:27:06.289 [2024-12-12 10:40:40.225011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.289 [2024-12-12 10:40:40.225044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.289 qpair failed and we were unable to recover it. 00:27:06.289 [2024-12-12 10:40:40.225149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.289 [2024-12-12 10:40:40.225182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.289 qpair failed and we were unable to recover it. 00:27:06.289 [2024-12-12 10:40:40.225370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.289 [2024-12-12 10:40:40.225403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.289 qpair failed and we were unable to recover it. 00:27:06.289 [2024-12-12 10:40:40.225642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.289 [2024-12-12 10:40:40.225676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.289 qpair failed and we were unable to recover it. 00:27:06.289 [2024-12-12 10:40:40.225921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.289 [2024-12-12 10:40:40.225954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.289 qpair failed and we were unable to recover it. 00:27:06.289 [2024-12-12 10:40:40.226199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.289 [2024-12-12 10:40:40.226231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.289 qpair failed and we were unable to recover it. 
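For reference, errno 111 on Linux is ECONNREFUSED: a TCP connect() to an address where nothing is listening fails immediately with this code. The following is a minimal standalone sketch, illustrative only and not SPDK's posix_sock_create; the address and port simply mirror the test target 10.0.0.2:4420 seen in the log:

    /* Minimal illustration (not SPDK code): a blocking TCP connect() to a
     * port with no listener fails with errno 111 (ECONNREFUSED), which is
     * exactly what posix_sock_create reports above. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                    /* NVMe/TCP default port */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* test target address */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener on 10.0.0.2:4420 this prints errno = 111 */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }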
[... the sequence keeps repeating on tqpair=0x7fb830000b90 for retries from 10:40:40.226437 through 10:40:40.227542 ...]
00:27:06.289 [2024-12-12 10:40:40.227791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.289 [2024-12-12 10:40:40.227863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.289 qpair failed and we were unable to recover it.
00:27:06.289 [2024-12-12 10:40:40.228051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.289 [2024-12-12 10:40:40.228135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.289 qpair failed and we were unable to recover it.
[... the same sequence repeats on tqpair=0x7fb82c000b90 from 10:40:40.228281 through 10:40:40.231047 ...]
[... two further failures on tqpair=0x7fb82c000b90 (10:40:40.231147, 10:40:40.231370), then the sequence moves back to tqpair=0x7fb838000b90 and repeats for every retry from 10:40:40.231598 through 10:40:40.241783 ...]
[... failures on tqpair=0x7fb838000b90 continue from 10:40:40.241982 through 10:40:40.243477 ...]
00:27:06.291 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1674303 Killed "${NVMF_APP[@]}" "$@"
[... two more failures on tqpair=0x7fb838000b90 (10:40:40.243728, 10:40:40.243874) ...]
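The Killed message above is the point of the test: target_disconnect.sh deliberately kills the nvmf target application (PID 1674303), so every subsequent connect() from the host is refused until the target is restarted. Below is a hedged sketch of the retry-until-give-up pattern the host log exhibits; it illustrates the observed behavior only and is not SPDK's actual reconnect logic:

    /* Sketch of the retry pattern visible in the log (illustrative, not
     * SPDK code): keep re-dialing while the listener is down, give up
     * after a bounded number of attempts. */
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    static bool try_connect(const char *ip, int port)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) return false;

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(port);
        inet_pton(AF_INET, ip, &addr.sin_addr);

        bool ok = connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0;
        if (!ok)
            fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                    errno, strerror(errno));
        close(fd);
        return ok;
    }

    int main(void)
    {
        /* Bounded retries, as the host keeps re-dialing the killed target. */
        for (int attempt = 0; attempt < 10; attempt++) {
            if (try_connect("10.0.0.2", 4420)) {
                puts("connected");
                return 0;
            }
            sleep(1); /* brief back-off between attempts */
        }
        fprintf(stderr, "giving up: unrecoverable, as for the qpairs above\n");
        return 1;
    }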
00:27:06.291 10:40:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:27:06.291 10:40:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:27:06.291 10:40:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:06.291 10:40:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:06.291 10:40:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... interleaved with the trace above, the connect()/qpair failure sequence on tqpair=0x7fb838000b90 continues from 10:40:40.244172 through 10:40:40.245749 ...]
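In the trace above, nvmfappstart -m 0xF0 restarts the target application with an SPDK core mask of 0xF0, i.e. bit-per-core selection of CPU cores 4-7. A tiny sketch decoding the mask, assuming the conventional bit i = core i meaning:

    /* Decode an SPDK-style hex core mask (bit i => CPU core i). For the
     * mask 0xF0 used above this prints: core mask 0xF0 selects cores: 4 5 6 7 */
    #include <stdio.h>

    int main(void)
    {
        unsigned long mask = 0xF0; /* from: nvmfappstart -m 0xF0 */
        printf("core mask 0x%lX selects cores:", mask);
        for (int core = 0; core < 64; core++)
            if (mask & (1UL << core))
                printf(" %d", core);
        putchar('\n');
        return 0;
    }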
00:27:06.292 (connect()/qpair-failure records for tqpair=0x7fb838000b90 continue, timestamps 10:40:40.245930 through 10:40:40.250115)
00:27:06.292 10:40:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1675006
00:27:06.292 10:40:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1675006
00:27:06.292 10:40:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1675006 ']'
00:27:06.292 10:40:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:27:06.292 (connect()/qpair-failure records for tqpair=0x7fb838000b90 remain interleaved with the trace above, timestamps 10:40:40.250231 through 10:40:40.251670)
00:27:06.292 10:40:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:06.292 10:40:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:06.292 10:40:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:06.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:06.292 10:40:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:06.292 10:40:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:06.292 (connect()/qpair-failure records for tqpair=0x7fb838000b90 remain interleaved with the trace above, timestamps 10:40:40.251862 through 10:40:40.253485)
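The waitforlisten step traced above polls until pid 1675006 is up and accepting on the RPC socket /var/tmp/spdk.sock, giving up after max_retries attempts. A minimal sketch of that wait-for-listen pattern (an assumption about the helper's behavior, not SPDK's actual autotest_common.sh implementation; rpc_addr and max_retries mirror the traced values):

/* wait_for_listen.c - retry-connect to an application's UNIX domain RPC
 * socket until it accepts, up to max_retries attempts. Illustrative sketch
 * of the waitforlisten idea, not SPDK's code. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int wait_for_listen(const char *rpc_addr, int max_retries)
{
    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        struct sockaddr_un sun = { .sun_family = AF_UNIX };
        strncpy(sun.sun_path, rpc_addr, sizeof(sun.sun_path) - 1);

        if (connect(fd, (struct sockaddr *)&sun, sizeof(sun)) == 0) {
            close(fd);
            return 0;            /* target is up and listening */
        }
        close(fd);
        sleep(1);                /* not listening yet; retry */
    }
    return -1;                   /* gave up after max_retries */
}

int main(void)
{
    puts("Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...");
    return wait_for_listen("/var/tmp/spdk.sock", 100) == 0 ? 0 : 1;
}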
00:27:06.293 (connect()/qpair-failure records for tqpair=0x7fb838000b90 continue, timestamps 10:40:40.253625 through 10:40:40.266782)
00:27:06.580 [2024-12-12 10:40:40.266941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.580 [2024-12-12 10:40:40.267013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:06.580 qpair failed and we were unable to recover it.
00:27:06.580 (the same connect()/qpair-failure record now repeats for tqpair=0x1c1b1a0, timestamps 10:40:40.267235 through 10:40:40.272754, then resumes for tqpair=0x7fb838000b90 from 10:40:40.272981 through 10:40:40.273416)
00:27:06.580 (connect()/qpair-failure records for tqpair=0x7fb838000b90 continue, timestamps 10:40:40.273548 through 10:40:40.286589)
00:27:06.582 [2024-12-12 10:40:40.286768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.582 [2024-12-12 10:40:40.286801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.582 qpair failed and we were unable to recover it. 00:27:06.582 [2024-12-12 10:40:40.286910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.582 [2024-12-12 10:40:40.286943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.582 qpair failed and we were unable to recover it. 00:27:06.582 [2024-12-12 10:40:40.287135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.582 [2024-12-12 10:40:40.287168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.582 qpair failed and we were unable to recover it. 00:27:06.582 [2024-12-12 10:40:40.287368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.582 [2024-12-12 10:40:40.287402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.582 qpair failed and we were unable to recover it. 00:27:06.582 [2024-12-12 10:40:40.287621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.582 [2024-12-12 10:40:40.287657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.582 qpair failed and we were unable to recover it. 00:27:06.582 [2024-12-12 10:40:40.287844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.582 [2024-12-12 10:40:40.287878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.582 qpair failed and we were unable to recover it. 00:27:06.582 [2024-12-12 10:40:40.287996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.582 [2024-12-12 10:40:40.288031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.582 qpair failed and we were unable to recover it. 00:27:06.582 [2024-12-12 10:40:40.288240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.582 [2024-12-12 10:40:40.288273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.582 qpair failed and we were unable to recover it. 00:27:06.582 [2024-12-12 10:40:40.288461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.582 [2024-12-12 10:40:40.288494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.582 qpair failed and we were unable to recover it. 00:27:06.582 [2024-12-12 10:40:40.288668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.582 [2024-12-12 10:40:40.288702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.582 qpair failed and we were unable to recover it. 
00:27:06.582 [2024-12-12 10:40:40.288892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.582 [2024-12-12 10:40:40.288924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.582 qpair failed and we were unable to recover it. 00:27:06.582 [2024-12-12 10:40:40.289126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.582 [2024-12-12 10:40:40.289158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.582 qpair failed and we were unable to recover it. 00:27:06.582 [2024-12-12 10:40:40.289399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.582 [2024-12-12 10:40:40.289431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.582 qpair failed and we were unable to recover it. 00:27:06.582 [2024-12-12 10:40:40.289625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.582 [2024-12-12 10:40:40.289658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.582 qpair failed and we were unable to recover it. 00:27:06.582 [2024-12-12 10:40:40.289773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.582 [2024-12-12 10:40:40.289807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.582 qpair failed and we were unable to recover it. 00:27:06.582 [2024-12-12 10:40:40.289935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.582 [2024-12-12 10:40:40.289967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.582 qpair failed and we were unable to recover it. 00:27:06.582 [2024-12-12 10:40:40.290182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.582 [2024-12-12 10:40:40.290215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.582 qpair failed and we were unable to recover it. 00:27:06.582 [2024-12-12 10:40:40.290413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.582 [2024-12-12 10:40:40.290452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.582 qpair failed and we were unable to recover it. 00:27:06.582 [2024-12-12 10:40:40.290579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.582 [2024-12-12 10:40:40.290611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.582 qpair failed and we were unable to recover it. 00:27:06.582 [2024-12-12 10:40:40.290725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.582 [2024-12-12 10:40:40.290758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.582 qpair failed and we were unable to recover it. 
00:27:06.582 [2024-12-12 10:40:40.290949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-12-12 10:40:40.290982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-12-12 10:40:40.291196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-12-12 10:40:40.291228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-12-12 10:40:40.291397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-12-12 10:40:40.291429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-12-12 10:40:40.291667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-12-12 10:40:40.291700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-12-12 10:40:40.291805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-12-12 10:40:40.291837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-12-12 10:40:40.291966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-12-12 10:40:40.291999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-12-12 10:40:40.292104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-12-12 10:40:40.292137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-12-12 10:40:40.292328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-12-12 10:40:40.292360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-12-12 10:40:40.292543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-12-12 10:40:40.292589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-12-12 10:40:40.292798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-12-12 10:40:40.292831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 
00:27:06.583 [2024-12-12 10:40:40.293020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-12-12 10:40:40.293052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-12-12 10:40:40.293247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-12-12 10:40:40.293281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-12-12 10:40:40.293400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-12-12 10:40:40.293432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-12-12 10:40:40.293533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-12-12 10:40:40.293565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-12-12 10:40:40.293692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-12-12 10:40:40.293725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-12-12 10:40:40.293910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-12-12 10:40:40.293942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-12-12 10:40:40.294116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-12-12 10:40:40.294149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-12-12 10:40:40.294264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-12-12 10:40:40.294297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-12-12 10:40:40.294420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-12-12 10:40:40.294452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-12-12 10:40:40.294554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-12-12 10:40:40.294610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 
00:27:06.583 [2024-12-12 10:40:40.294724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-12-12 10:40:40.294759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-12-12 10:40:40.294998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-12-12 10:40:40.295032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-12-12 10:40:40.295229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-12-12 10:40:40.295262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-12-12 10:40:40.295372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-12-12 10:40:40.295404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-12-12 10:40:40.295524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-12-12 10:40:40.295556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-12-12 10:40:40.295691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-12-12 10:40:40.295725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-12-12 10:40:40.296017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-12-12 10:40:40.296051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-12-12 10:40:40.296259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-12-12 10:40:40.296291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-12-12 10:40:40.296536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-12-12 10:40:40.296579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-12-12 10:40:40.296845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-12-12 10:40:40.296878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 
00:27:06.583 [2024-12-12 10:40:40.296992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-12-12 10:40:40.297024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-12-12 10:40:40.297272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-12-12 10:40:40.297305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-12-12 10:40:40.297422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-12-12 10:40:40.297455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-12-12 10:40:40.297772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-12-12 10:40:40.297807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-12-12 10:40:40.297993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-12-12 10:40:40.298027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-12-12 10:40:40.298161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-12-12 10:40:40.298194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.583 [2024-12-12 10:40:40.298373] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:27:06.583 [2024-12-12 10:40:40.298416] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:06.583 [2024-12-12 10:40:40.298431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.583 [2024-12-12 10:40:40.298462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.583 qpair failed and we were unable to recover it. 00:27:06.584 [2024-12-12 10:40:40.298601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.584 [2024-12-12 10:40:40.298632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.584 qpair failed and we were unable to recover it. 00:27:06.584 [2024-12-12 10:40:40.298769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.584 [2024-12-12 10:40:40.298801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.584 qpair failed and we were unable to recover it. 
00:27:06.584 [2024-12-12 10:40:40.305769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.584 [2024-12-12 10:40:40.305840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420
00:27:06.584 qpair failed and we were unable to recover it.
00:27:06.584 [2024-12-12 10:40:40.305986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.585 [2024-12-12 10:40:40.306033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:06.585 qpair failed and we were unable to recover it.
[... the identical connect() failed, errno = 111 / sock connection error / qpair failed sequence for tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 repeats through 2024-12-12 10:40:40.321522 ...]
00:27:06.586 [2024-12-12 10:40:40.321768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.586 [2024-12-12 10:40:40.321801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.586 qpair failed and we were unable to recover it. 00:27:06.586 [2024-12-12 10:40:40.321933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.586 [2024-12-12 10:40:40.321966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.586 qpair failed and we were unable to recover it. 00:27:06.586 [2024-12-12 10:40:40.322085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.586 [2024-12-12 10:40:40.322118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.586 qpair failed and we were unable to recover it. 00:27:06.586 [2024-12-12 10:40:40.322317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.586 [2024-12-12 10:40:40.322350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.586 qpair failed and we were unable to recover it. 00:27:06.586 [2024-12-12 10:40:40.322476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.586 [2024-12-12 10:40:40.322509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.586 qpair failed and we were unable to recover it. 00:27:06.586 [2024-12-12 10:40:40.322628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.586 [2024-12-12 10:40:40.322662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.586 qpair failed and we were unable to recover it. 00:27:06.587 [2024-12-12 10:40:40.322792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.587 [2024-12-12 10:40:40.322824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.587 qpair failed and we were unable to recover it. 00:27:06.587 [2024-12-12 10:40:40.323031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.587 [2024-12-12 10:40:40.323063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.587 qpair failed and we were unable to recover it. 00:27:06.587 [2024-12-12 10:40:40.323195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.587 [2024-12-12 10:40:40.323233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.587 qpair failed and we were unable to recover it. 00:27:06.587 [2024-12-12 10:40:40.323426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.587 [2024-12-12 10:40:40.323458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.587 qpair failed and we were unable to recover it. 
00:27:06.587 [2024-12-12 10:40:40.323629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.587 [2024-12-12 10:40:40.323663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.587 qpair failed and we were unable to recover it. 00:27:06.587 [2024-12-12 10:40:40.323781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.587 [2024-12-12 10:40:40.323814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.587 qpair failed and we were unable to recover it. 00:27:06.587 [2024-12-12 10:40:40.323932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.587 [2024-12-12 10:40:40.323964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.587 qpair failed and we were unable to recover it. 00:27:06.587 [2024-12-12 10:40:40.324092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.587 [2024-12-12 10:40:40.324125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.587 qpair failed and we were unable to recover it. 00:27:06.587 [2024-12-12 10:40:40.324304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.587 [2024-12-12 10:40:40.324336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.587 qpair failed and we were unable to recover it. 00:27:06.587 [2024-12-12 10:40:40.324521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.587 [2024-12-12 10:40:40.324554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.587 qpair failed and we were unable to recover it. 00:27:06.587 [2024-12-12 10:40:40.324673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.587 [2024-12-12 10:40:40.324706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.587 qpair failed and we were unable to recover it. 00:27:06.587 [2024-12-12 10:40:40.324886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.587 [2024-12-12 10:40:40.324918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.587 qpair failed and we were unable to recover it. 00:27:06.587 [2024-12-12 10:40:40.325113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.587 [2024-12-12 10:40:40.325145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.587 qpair failed and we were unable to recover it. 00:27:06.587 [2024-12-12 10:40:40.325256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.587 [2024-12-12 10:40:40.325289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.587 qpair failed and we were unable to recover it. 
00:27:06.587 [2024-12-12 10:40:40.325476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.587 [2024-12-12 10:40:40.325508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.587 qpair failed and we were unable to recover it. 00:27:06.587 [2024-12-12 10:40:40.325630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.587 [2024-12-12 10:40:40.325665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.587 qpair failed and we were unable to recover it. 00:27:06.587 [2024-12-12 10:40:40.325779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.587 [2024-12-12 10:40:40.325812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.587 qpair failed and we were unable to recover it. 00:27:06.587 [2024-12-12 10:40:40.325922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.587 [2024-12-12 10:40:40.325954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.587 qpair failed and we were unable to recover it. 00:27:06.587 [2024-12-12 10:40:40.326132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.587 [2024-12-12 10:40:40.326164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.587 qpair failed and we were unable to recover it. 00:27:06.587 [2024-12-12 10:40:40.326355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.587 [2024-12-12 10:40:40.326387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.587 qpair failed and we were unable to recover it. 00:27:06.587 [2024-12-12 10:40:40.326507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.587 [2024-12-12 10:40:40.326539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.587 qpair failed and we were unable to recover it. 00:27:06.587 [2024-12-12 10:40:40.326707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.587 [2024-12-12 10:40:40.326785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.587 qpair failed and we were unable to recover it. 00:27:06.587 [2024-12-12 10:40:40.326926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.587 [2024-12-12 10:40:40.326967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.587 qpair failed and we were unable to recover it. 00:27:06.587 [2024-12-12 10:40:40.327102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.587 [2024-12-12 10:40:40.327136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.587 qpair failed and we were unable to recover it. 
00:27:06.587 [2024-12-12 10:40:40.327375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.587 [2024-12-12 10:40:40.327409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.587 qpair failed and we were unable to recover it. 00:27:06.587 [2024-12-12 10:40:40.327541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.587 [2024-12-12 10:40:40.327585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.587 qpair failed and we were unable to recover it. 00:27:06.587 [2024-12-12 10:40:40.327767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.587 [2024-12-12 10:40:40.327798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.587 qpair failed and we were unable to recover it. 00:27:06.587 [2024-12-12 10:40:40.328001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.587 [2024-12-12 10:40:40.328035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.587 qpair failed and we were unable to recover it. 00:27:06.587 [2024-12-12 10:40:40.328204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.587 [2024-12-12 10:40:40.328237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.587 qpair failed and we were unable to recover it. 00:27:06.587 [2024-12-12 10:40:40.328342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.587 [2024-12-12 10:40:40.328380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.587 qpair failed and we were unable to recover it. 00:27:06.587 [2024-12-12 10:40:40.328530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.587 [2024-12-12 10:40:40.328563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.587 qpair failed and we were unable to recover it. 00:27:06.587 [2024-12-12 10:40:40.328678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.587 [2024-12-12 10:40:40.328709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.587 qpair failed and we were unable to recover it. 00:27:06.587 [2024-12-12 10:40:40.328812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.587 [2024-12-12 10:40:40.328845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.587 qpair failed and we were unable to recover it. 00:27:06.587 [2024-12-12 10:40:40.328967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.587 [2024-12-12 10:40:40.328999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.587 qpair failed and we were unable to recover it. 
00:27:06.587 [2024-12-12 10:40:40.329111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.587 [2024-12-12 10:40:40.329143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.587 qpair failed and we were unable to recover it. 00:27:06.587 [2024-12-12 10:40:40.329313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.588 [2024-12-12 10:40:40.329346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.588 qpair failed and we were unable to recover it. 00:27:06.588 [2024-12-12 10:40:40.329521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.588 [2024-12-12 10:40:40.329553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.588 qpair failed and we were unable to recover it. 00:27:06.588 [2024-12-12 10:40:40.329684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.588 [2024-12-12 10:40:40.329719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.588 qpair failed and we were unable to recover it. 00:27:06.588 [2024-12-12 10:40:40.329842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.588 [2024-12-12 10:40:40.329878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.588 qpair failed and we were unable to recover it. 00:27:06.588 [2024-12-12 10:40:40.329989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.588 [2024-12-12 10:40:40.330021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.588 qpair failed and we were unable to recover it. 00:27:06.588 [2024-12-12 10:40:40.330306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.588 [2024-12-12 10:40:40.330338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.588 qpair failed and we were unable to recover it. 00:27:06.588 [2024-12-12 10:40:40.330445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.588 [2024-12-12 10:40:40.330478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.588 qpair failed and we were unable to recover it. 00:27:06.588 [2024-12-12 10:40:40.330591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.588 [2024-12-12 10:40:40.330624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.588 qpair failed and we were unable to recover it. 00:27:06.588 [2024-12-12 10:40:40.330817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.588 [2024-12-12 10:40:40.330850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.588 qpair failed and we were unable to recover it. 
00:27:06.588 [2024-12-12 10:40:40.330990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.588 [2024-12-12 10:40:40.331023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.588 qpair failed and we were unable to recover it. 00:27:06.588 [2024-12-12 10:40:40.331147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.588 [2024-12-12 10:40:40.331179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.588 qpair failed and we were unable to recover it. 00:27:06.588 [2024-12-12 10:40:40.331282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.588 [2024-12-12 10:40:40.331315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.588 qpair failed and we were unable to recover it. 00:27:06.588 [2024-12-12 10:40:40.331432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.588 [2024-12-12 10:40:40.331465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.588 qpair failed and we were unable to recover it. 00:27:06.588 [2024-12-12 10:40:40.331590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.588 [2024-12-12 10:40:40.331623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.588 qpair failed and we were unable to recover it. 00:27:06.588 [2024-12-12 10:40:40.331741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.588 [2024-12-12 10:40:40.331774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.588 qpair failed and we were unable to recover it. 00:27:06.588 [2024-12-12 10:40:40.331889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.588 [2024-12-12 10:40:40.331922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.588 qpair failed and we were unable to recover it. 00:27:06.588 [2024-12-12 10:40:40.332093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.588 [2024-12-12 10:40:40.332125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.588 qpair failed and we were unable to recover it. 00:27:06.588 [2024-12-12 10:40:40.332257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.588 [2024-12-12 10:40:40.332290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.588 qpair failed and we were unable to recover it. 00:27:06.588 [2024-12-12 10:40:40.332396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.588 [2024-12-12 10:40:40.332429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.588 qpair failed and we were unable to recover it. 
00:27:06.588 [2024-12-12 10:40:40.332537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.588 [2024-12-12 10:40:40.332580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.588 qpair failed and we were unable to recover it. 00:27:06.588 [2024-12-12 10:40:40.332792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.588 [2024-12-12 10:40:40.332824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.588 qpair failed and we were unable to recover it. 00:27:06.588 [2024-12-12 10:40:40.333003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.588 [2024-12-12 10:40:40.333037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.588 qpair failed and we were unable to recover it. 00:27:06.588 [2024-12-12 10:40:40.333276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.588 [2024-12-12 10:40:40.333308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.588 qpair failed and we were unable to recover it. 00:27:06.588 [2024-12-12 10:40:40.333429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.588 [2024-12-12 10:40:40.333463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.588 qpair failed and we were unable to recover it. 00:27:06.588 [2024-12-12 10:40:40.333582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.588 [2024-12-12 10:40:40.333615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.588 qpair failed and we were unable to recover it. 00:27:06.588 [2024-12-12 10:40:40.333752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.588 [2024-12-12 10:40:40.333785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.588 qpair failed and we were unable to recover it. 00:27:06.588 [2024-12-12 10:40:40.333958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.588 [2024-12-12 10:40:40.333992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.588 qpair failed and we were unable to recover it. 00:27:06.588 [2024-12-12 10:40:40.334124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.588 [2024-12-12 10:40:40.334156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.588 qpair failed and we were unable to recover it. 00:27:06.588 [2024-12-12 10:40:40.334342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.588 [2024-12-12 10:40:40.334375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.588 qpair failed and we were unable to recover it. 
00:27:06.588 [2024-12-12 10:40:40.334548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.588 [2024-12-12 10:40:40.334590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.588 qpair failed and we were unable to recover it. 00:27:06.588 [2024-12-12 10:40:40.334763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.588 [2024-12-12 10:40:40.334798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.588 qpair failed and we were unable to recover it. 00:27:06.588 [2024-12-12 10:40:40.334983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.588 [2024-12-12 10:40:40.335016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.588 qpair failed and we were unable to recover it. 00:27:06.588 [2024-12-12 10:40:40.335137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.588 [2024-12-12 10:40:40.335170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.588 qpair failed and we were unable to recover it. 00:27:06.588 [2024-12-12 10:40:40.335405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.588 [2024-12-12 10:40:40.335438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.588 qpair failed and we were unable to recover it. 00:27:06.588 [2024-12-12 10:40:40.335607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.588 [2024-12-12 10:40:40.335648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.588 qpair failed and we were unable to recover it. 00:27:06.588 [2024-12-12 10:40:40.335824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.588 [2024-12-12 10:40:40.335858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.588 qpair failed and we were unable to recover it. 00:27:06.588 [2024-12-12 10:40:40.336032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.588 [2024-12-12 10:40:40.336066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.588 qpair failed and we were unable to recover it. 00:27:06.588 [2024-12-12 10:40:40.336240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.588 [2024-12-12 10:40:40.336272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.588 qpair failed and we were unable to recover it. 00:27:06.588 [2024-12-12 10:40:40.336452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.588 [2024-12-12 10:40:40.336485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.588 qpair failed and we were unable to recover it. 
00:27:06.588 [2024-12-12 10:40:40.336661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.589 [2024-12-12 10:40:40.336694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.589 qpair failed and we were unable to recover it. 00:27:06.589 [2024-12-12 10:40:40.336876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.589 [2024-12-12 10:40:40.336908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.589 qpair failed and we were unable to recover it. 00:27:06.589 [2024-12-12 10:40:40.337014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.589 [2024-12-12 10:40:40.337047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.589 qpair failed and we were unable to recover it. 00:27:06.589 [2024-12-12 10:40:40.337211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.589 [2024-12-12 10:40:40.337242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.589 qpair failed and we were unable to recover it. 00:27:06.589 [2024-12-12 10:40:40.337338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.589 [2024-12-12 10:40:40.337371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.589 qpair failed and we were unable to recover it. 00:27:06.589 [2024-12-12 10:40:40.337542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.589 [2024-12-12 10:40:40.337586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.589 qpair failed and we were unable to recover it. 00:27:06.589 [2024-12-12 10:40:40.337700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.589 [2024-12-12 10:40:40.337732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.589 qpair failed and we were unable to recover it. 00:27:06.589 [2024-12-12 10:40:40.337847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.589 [2024-12-12 10:40:40.337880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.589 qpair failed and we were unable to recover it. 00:27:06.589 [2024-12-12 10:40:40.338061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.589 [2024-12-12 10:40:40.338094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.589 qpair failed and we were unable to recover it. 00:27:06.589 [2024-12-12 10:40:40.338217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.589 [2024-12-12 10:40:40.338251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.589 qpair failed and we were unable to recover it. 
00:27:06.589 [2024-12-12 10:40:40.338358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.589 [2024-12-12 10:40:40.338391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.589 qpair failed and we were unable to recover it. 00:27:06.589 [2024-12-12 10:40:40.338500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.589 [2024-12-12 10:40:40.338533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.589 qpair failed and we were unable to recover it. 00:27:06.589 [2024-12-12 10:40:40.338716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.589 [2024-12-12 10:40:40.338749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.589 qpair failed and we were unable to recover it. 00:27:06.589 [2024-12-12 10:40:40.338929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.589 [2024-12-12 10:40:40.338962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.589 qpair failed and we were unable to recover it. 00:27:06.589 [2024-12-12 10:40:40.339146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.589 [2024-12-12 10:40:40.339178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.589 qpair failed and we were unable to recover it. 00:27:06.589 [2024-12-12 10:40:40.339289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.589 [2024-12-12 10:40:40.339322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.589 qpair failed and we were unable to recover it. 00:27:06.589 [2024-12-12 10:40:40.339513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.589 [2024-12-12 10:40:40.339546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.589 qpair failed and we were unable to recover it. 00:27:06.589 [2024-12-12 10:40:40.339772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.589 [2024-12-12 10:40:40.339806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.589 qpair failed and we were unable to recover it. 00:27:06.589 [2024-12-12 10:40:40.339915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.589 [2024-12-12 10:40:40.339949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.589 qpair failed and we were unable to recover it. 00:27:06.589 [2024-12-12 10:40:40.340087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.589 [2024-12-12 10:40:40.340120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.589 qpair failed and we were unable to recover it. 
00:27:06.589 [2024-12-12 10:40:40.340228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.589 [2024-12-12 10:40:40.340260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.589 qpair failed and we were unable to recover it. 00:27:06.589 [2024-12-12 10:40:40.340436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.589 [2024-12-12 10:40:40.340468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.589 qpair failed and we were unable to recover it. 00:27:06.589 [2024-12-12 10:40:40.340658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.589 [2024-12-12 10:40:40.340695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.589 qpair failed and we were unable to recover it. 00:27:06.589 [2024-12-12 10:40:40.340827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.589 [2024-12-12 10:40:40.340860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.589 qpair failed and we were unable to recover it. 00:27:06.589 [2024-12-12 10:40:40.341100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.589 [2024-12-12 10:40:40.341132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.589 qpair failed and we were unable to recover it. 00:27:06.589 [2024-12-12 10:40:40.341255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.589 [2024-12-12 10:40:40.341287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.589 qpair failed and we were unable to recover it. 00:27:06.589 [2024-12-12 10:40:40.341395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.589 [2024-12-12 10:40:40.341428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.589 qpair failed and we were unable to recover it. 00:27:06.589 [2024-12-12 10:40:40.341545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.589 [2024-12-12 10:40:40.341599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.589 qpair failed and we were unable to recover it. 00:27:06.589 [2024-12-12 10:40:40.341721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.589 [2024-12-12 10:40:40.341754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.589 qpair failed and we were unable to recover it. 00:27:06.589 [2024-12-12 10:40:40.341868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.589 [2024-12-12 10:40:40.341901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.589 qpair failed and we were unable to recover it. 
00:27:06.589 [2024-12-12 10:40:40.342018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.589 [2024-12-12 10:40:40.342050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.589 qpair failed and we were unable to recover it. 00:27:06.589 [2024-12-12 10:40:40.342188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.589 [2024-12-12 10:40:40.342220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.589 qpair failed and we were unable to recover it. 00:27:06.589 [2024-12-12 10:40:40.342344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.589 [2024-12-12 10:40:40.342376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.589 qpair failed and we were unable to recover it. 00:27:06.589 [2024-12-12 10:40:40.342506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.589 [2024-12-12 10:40:40.342539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.589 qpair failed and we were unable to recover it. 00:27:06.589 [2024-12-12 10:40:40.342670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.589 [2024-12-12 10:40:40.342707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.589 qpair failed and we were unable to recover it. 00:27:06.589 [2024-12-12 10:40:40.342844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.589 [2024-12-12 10:40:40.342876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.589 qpair failed and we were unable to recover it. 00:27:06.589 [2024-12-12 10:40:40.342992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.589 [2024-12-12 10:40:40.343025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.589 qpair failed and we were unable to recover it. 00:27:06.589 [2024-12-12 10:40:40.343135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.589 [2024-12-12 10:40:40.343167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.589 qpair failed and we were unable to recover it. 00:27:06.589 [2024-12-12 10:40:40.343373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.589 [2024-12-12 10:40:40.343405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.589 qpair failed and we were unable to recover it. 00:27:06.589 [2024-12-12 10:40:40.343584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.589 [2024-12-12 10:40:40.343618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.589 qpair failed and we were unable to recover it. 
00:27:06.590 [2024-12-12 10:40:40.343808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.590 [2024-12-12 10:40:40.343840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.590 qpair failed and we were unable to recover it. 00:27:06.590 [2024-12-12 10:40:40.343962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.590 [2024-12-12 10:40:40.343994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.590 qpair failed and we were unable to recover it. 00:27:06.590 [2024-12-12 10:40:40.344147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.590 [2024-12-12 10:40:40.344179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.590 qpair failed and we were unable to recover it. 00:27:06.590 [2024-12-12 10:40:40.344351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.590 [2024-12-12 10:40:40.344384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.590 qpair failed and we were unable to recover it. 00:27:06.590 [2024-12-12 10:40:40.344501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.590 [2024-12-12 10:40:40.344534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.590 qpair failed and we were unable to recover it. 00:27:06.590 [2024-12-12 10:40:40.344655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.590 [2024-12-12 10:40:40.344690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.590 qpair failed and we were unable to recover it. 00:27:06.590 [2024-12-12 10:40:40.344813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.590 [2024-12-12 10:40:40.344846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.590 qpair failed and we were unable to recover it. 00:27:06.590 [2024-12-12 10:40:40.345057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.590 [2024-12-12 10:40:40.345089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.590 qpair failed and we were unable to recover it. 00:27:06.590 [2024-12-12 10:40:40.345211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.590 [2024-12-12 10:40:40.345243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.590 qpair failed and we were unable to recover it. 00:27:06.590 [2024-12-12 10:40:40.345355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.590 [2024-12-12 10:40:40.345387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.590 qpair failed and we were unable to recover it. 
00:27:06.590 [2024-12-12 10:40:40.345510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.590 [2024-12-12 10:40:40.345542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.590 qpair failed and we were unable to recover it.
00:27:06.590 [... the posix.c:1054 / nvme_tcp.c:2288 error pair above repeats from 10:40:40.345677 through 10:40:40.362398, cycling through tqpair=0x7fb838000b90, 0x7fb830000b90, and 0x7fb82c000b90, always with addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:27:06.592 [2024-12-12 10:40:40.362471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:27:06.592 [... the same error pair continues from 10:40:40.362506 onward, cycling through tqpair=0x7fb82c000b90, 0x7fb830000b90, and 0x1c1b1a0, until the final attempt below; roughly 200 failed connection attempts appear in this window in total ...]
00:27:06.595 [2024-12-12 10:40:40.385135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.595 [2024-12-12 10:40:40.385167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:06.595 qpair failed and we were unable to recover it.
00:27:06.595 [2024-12-12 10:40:40.385282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.595 [2024-12-12 10:40:40.385316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.595 qpair failed and we were unable to recover it. 00:27:06.595 [2024-12-12 10:40:40.385497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.595 [2024-12-12 10:40:40.385530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.595 qpair failed and we were unable to recover it. 00:27:06.595 [2024-12-12 10:40:40.385728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.595 [2024-12-12 10:40:40.385762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.595 qpair failed and we were unable to recover it. 00:27:06.595 [2024-12-12 10:40:40.385889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.595 [2024-12-12 10:40:40.385922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.595 qpair failed and we were unable to recover it. 00:27:06.595 [2024-12-12 10:40:40.386047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.595 [2024-12-12 10:40:40.386080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.595 qpair failed and we were unable to recover it. 00:27:06.595 [2024-12-12 10:40:40.386263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.595 [2024-12-12 10:40:40.386295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.595 qpair failed and we were unable to recover it. 00:27:06.595 [2024-12-12 10:40:40.386423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.595 [2024-12-12 10:40:40.386457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.595 qpair failed and we were unable to recover it. 00:27:06.595 [2024-12-12 10:40:40.386598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.595 [2024-12-12 10:40:40.386633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.595 qpair failed and we were unable to recover it. 00:27:06.595 [2024-12-12 10:40:40.386804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.595 [2024-12-12 10:40:40.386836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.595 qpair failed and we were unable to recover it. 00:27:06.595 [2024-12-12 10:40:40.386954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.595 [2024-12-12 10:40:40.386987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.595 qpair failed and we were unable to recover it. 
00:27:06.595 [2024-12-12 10:40:40.387093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.595 [2024-12-12 10:40:40.387124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.595 qpair failed and we were unable to recover it. 00:27:06.595 [2024-12-12 10:40:40.387304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.595 [2024-12-12 10:40:40.387337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.595 qpair failed and we were unable to recover it. 00:27:06.595 [2024-12-12 10:40:40.387532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.596 [2024-12-12 10:40:40.387565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.596 qpair failed and we were unable to recover it. 00:27:06.596 [2024-12-12 10:40:40.387816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.596 [2024-12-12 10:40:40.387851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.596 qpair failed and we were unable to recover it. 00:27:06.596 [2024-12-12 10:40:40.388060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.596 [2024-12-12 10:40:40.388093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.596 qpair failed and we were unable to recover it. 00:27:06.596 [2024-12-12 10:40:40.388211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.596 [2024-12-12 10:40:40.388245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.596 qpair failed and we were unable to recover it. 00:27:06.596 [2024-12-12 10:40:40.388362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.596 [2024-12-12 10:40:40.388395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.596 qpair failed and we were unable to recover it. 00:27:06.596 [2024-12-12 10:40:40.388516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.596 [2024-12-12 10:40:40.388549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.596 qpair failed and we were unable to recover it. 00:27:06.596 [2024-12-12 10:40:40.388668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.596 [2024-12-12 10:40:40.388707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.596 qpair failed and we were unable to recover it. 00:27:06.596 [2024-12-12 10:40:40.388832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.596 [2024-12-12 10:40:40.388865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.596 qpair failed and we were unable to recover it. 
00:27:06.596 [2024-12-12 10:40:40.388979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.596 [2024-12-12 10:40:40.389015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.596 qpair failed and we were unable to recover it. 00:27:06.596 [2024-12-12 10:40:40.389132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.596 [2024-12-12 10:40:40.389166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.596 qpair failed and we were unable to recover it. 00:27:06.596 [2024-12-12 10:40:40.389404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.596 [2024-12-12 10:40:40.389438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.596 qpair failed and we were unable to recover it. 00:27:06.596 [2024-12-12 10:40:40.389578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.596 [2024-12-12 10:40:40.389613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.596 qpair failed and we were unable to recover it. 00:27:06.596 [2024-12-12 10:40:40.389789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.596 [2024-12-12 10:40:40.389822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.596 qpair failed and we were unable to recover it. 00:27:06.596 [2024-12-12 10:40:40.390063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.596 [2024-12-12 10:40:40.390096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.596 qpair failed and we were unable to recover it. 00:27:06.596 [2024-12-12 10:40:40.390288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.596 [2024-12-12 10:40:40.390322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.596 qpair failed and we were unable to recover it. 00:27:06.596 [2024-12-12 10:40:40.390450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.596 [2024-12-12 10:40:40.390484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.596 qpair failed and we were unable to recover it. 00:27:06.596 [2024-12-12 10:40:40.390666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.596 [2024-12-12 10:40:40.390701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.596 qpair failed and we were unable to recover it. 00:27:06.596 [2024-12-12 10:40:40.390877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.596 [2024-12-12 10:40:40.390910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.596 qpair failed and we were unable to recover it. 
00:27:06.596 [2024-12-12 10:40:40.391019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.596 [2024-12-12 10:40:40.391052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.596 qpair failed and we were unable to recover it. 00:27:06.596 [2024-12-12 10:40:40.391259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.596 [2024-12-12 10:40:40.391292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.596 qpair failed and we were unable to recover it. 00:27:06.596 [2024-12-12 10:40:40.391400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.596 [2024-12-12 10:40:40.391434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.596 qpair failed and we were unable to recover it. 00:27:06.596 [2024-12-12 10:40:40.391583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.596 [2024-12-12 10:40:40.391618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.596 qpair failed and we were unable to recover it. 00:27:06.596 [2024-12-12 10:40:40.391733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.596 [2024-12-12 10:40:40.391767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.596 qpair failed and we were unable to recover it. 00:27:06.596 [2024-12-12 10:40:40.391949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.596 [2024-12-12 10:40:40.391983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.596 qpair failed and we were unable to recover it. 00:27:06.596 [2024-12-12 10:40:40.392093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.596 [2024-12-12 10:40:40.392127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.596 qpair failed and we were unable to recover it. 00:27:06.596 [2024-12-12 10:40:40.392316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.596 [2024-12-12 10:40:40.392350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.596 qpair failed and we were unable to recover it. 00:27:06.596 [2024-12-12 10:40:40.392534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.596 [2024-12-12 10:40:40.392567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.596 qpair failed and we were unable to recover it. 00:27:06.596 [2024-12-12 10:40:40.392697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.596 [2024-12-12 10:40:40.392735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.596 qpair failed and we were unable to recover it. 
00:27:06.596 [2024-12-12 10:40:40.392910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.596 [2024-12-12 10:40:40.392944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.596 qpair failed and we were unable to recover it. 00:27:06.596 [2024-12-12 10:40:40.393056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.596 [2024-12-12 10:40:40.393097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.596 qpair failed and we were unable to recover it. 00:27:06.596 [2024-12-12 10:40:40.393344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.596 [2024-12-12 10:40:40.393378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.596 qpair failed and we were unable to recover it. 00:27:06.596 [2024-12-12 10:40:40.393498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.596 [2024-12-12 10:40:40.393532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.596 qpair failed and we were unable to recover it. 00:27:06.596 [2024-12-12 10:40:40.393644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.596 [2024-12-12 10:40:40.393679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.596 qpair failed and we were unable to recover it. 00:27:06.596 [2024-12-12 10:40:40.393813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.596 [2024-12-12 10:40:40.393847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.596 qpair failed and we were unable to recover it. 00:27:06.596 [2024-12-12 10:40:40.393962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.596 [2024-12-12 10:40:40.393995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.596 qpair failed and we were unable to recover it. 00:27:06.596 [2024-12-12 10:40:40.394100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.596 [2024-12-12 10:40:40.394132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.596 qpair failed and we were unable to recover it. 00:27:06.597 [2024-12-12 10:40:40.394258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.597 [2024-12-12 10:40:40.394292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.597 qpair failed and we were unable to recover it. 00:27:06.597 [2024-12-12 10:40:40.394408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.597 [2024-12-12 10:40:40.394440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.597 qpair failed and we were unable to recover it. 
00:27:06.597 [2024-12-12 10:40:40.394557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.597 [2024-12-12 10:40:40.394619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.597 qpair failed and we were unable to recover it. 00:27:06.597 [2024-12-12 10:40:40.394727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.597 [2024-12-12 10:40:40.394760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.597 qpair failed and we were unable to recover it. 00:27:06.597 [2024-12-12 10:40:40.394890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.597 [2024-12-12 10:40:40.394923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.597 qpair failed and we were unable to recover it. 00:27:06.597 [2024-12-12 10:40:40.395107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.597 [2024-12-12 10:40:40.395141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.597 qpair failed and we were unable to recover it. 00:27:06.597 [2024-12-12 10:40:40.395314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.597 [2024-12-12 10:40:40.395347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.597 qpair failed and we were unable to recover it. 00:27:06.597 [2024-12-12 10:40:40.395567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.597 [2024-12-12 10:40:40.395626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.597 qpair failed and we were unable to recover it. 00:27:06.597 [2024-12-12 10:40:40.395745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.597 [2024-12-12 10:40:40.395779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.597 qpair failed and we were unable to recover it. 00:27:06.597 [2024-12-12 10:40:40.395974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.597 [2024-12-12 10:40:40.396007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.597 qpair failed and we were unable to recover it. 00:27:06.597 [2024-12-12 10:40:40.396127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.597 [2024-12-12 10:40:40.396160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.597 qpair failed and we were unable to recover it. 00:27:06.597 [2024-12-12 10:40:40.396383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.597 [2024-12-12 10:40:40.396417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.597 qpair failed and we were unable to recover it. 
00:27:06.597 [2024-12-12 10:40:40.396531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.597 [2024-12-12 10:40:40.396564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.597 qpair failed and we were unable to recover it. 00:27:06.597 [2024-12-12 10:40:40.396745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.597 [2024-12-12 10:40:40.396779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.597 qpair failed and we were unable to recover it. 00:27:06.597 [2024-12-12 10:40:40.396956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.597 [2024-12-12 10:40:40.396989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.597 qpair failed and we were unable to recover it. 00:27:06.597 [2024-12-12 10:40:40.397102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.597 [2024-12-12 10:40:40.397135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.597 qpair failed and we were unable to recover it. 00:27:06.597 [2024-12-12 10:40:40.397264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.597 [2024-12-12 10:40:40.397296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.597 qpair failed and we were unable to recover it. 00:27:06.597 [2024-12-12 10:40:40.397547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.597 [2024-12-12 10:40:40.397592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.597 qpair failed and we were unable to recover it. 00:27:06.597 [2024-12-12 10:40:40.397767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.597 [2024-12-12 10:40:40.397800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.597 qpair failed and we were unable to recover it. 00:27:06.597 [2024-12-12 10:40:40.397915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.597 [2024-12-12 10:40:40.397949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.597 qpair failed and we were unable to recover it. 00:27:06.597 [2024-12-12 10:40:40.398136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.597 [2024-12-12 10:40:40.398170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.597 qpair failed and we were unable to recover it. 00:27:06.597 [2024-12-12 10:40:40.398295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.597 [2024-12-12 10:40:40.398327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.597 qpair failed and we were unable to recover it. 
00:27:06.597 [2024-12-12 10:40:40.398454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.597 [2024-12-12 10:40:40.398487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.597 qpair failed and we were unable to recover it. 00:27:06.597 [2024-12-12 10:40:40.398662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.597 [2024-12-12 10:40:40.398697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.597 qpair failed and we were unable to recover it. 00:27:06.597 [2024-12-12 10:40:40.398877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.597 [2024-12-12 10:40:40.398911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.597 qpair failed and we were unable to recover it. 00:27:06.597 [2024-12-12 10:40:40.399044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.597 [2024-12-12 10:40:40.399078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.597 qpair failed and we were unable to recover it. 00:27:06.597 [2024-12-12 10:40:40.399196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.597 [2024-12-12 10:40:40.399229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.597 qpair failed and we were unable to recover it. 00:27:06.597 [2024-12-12 10:40:40.399402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.597 [2024-12-12 10:40:40.399434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.597 qpair failed and we were unable to recover it. 00:27:06.597 [2024-12-12 10:40:40.399552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.597 [2024-12-12 10:40:40.399595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.597 qpair failed and we were unable to recover it. 00:27:06.597 [2024-12-12 10:40:40.399801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.597 [2024-12-12 10:40:40.399835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.597 qpair failed and we were unable to recover it. 00:27:06.597 [2024-12-12 10:40:40.399940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.597 [2024-12-12 10:40:40.399973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.597 qpair failed and we were unable to recover it. 00:27:06.597 [2024-12-12 10:40:40.400151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.597 [2024-12-12 10:40:40.400185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.597 qpair failed and we were unable to recover it. 
00:27:06.597 [2024-12-12 10:40:40.400304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.597 [2024-12-12 10:40:40.400336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.597 qpair failed and we were unable to recover it. 00:27:06.597 [2024-12-12 10:40:40.400515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.597 [2024-12-12 10:40:40.400548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.597 qpair failed and we were unable to recover it. 00:27:06.597 [2024-12-12 10:40:40.400674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.597 [2024-12-12 10:40:40.400709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.597 qpair failed and we were unable to recover it. 00:27:06.597 [2024-12-12 10:40:40.400897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.597 [2024-12-12 10:40:40.400936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.597 qpair failed and we were unable to recover it. 00:27:06.597 [2024-12-12 10:40:40.401050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.597 [2024-12-12 10:40:40.401084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.597 qpair failed and we were unable to recover it. 00:27:06.597 [2024-12-12 10:40:40.401195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.597 [2024-12-12 10:40:40.401229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.597 qpair failed and we were unable to recover it. 00:27:06.597 [2024-12-12 10:40:40.401468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.597 [2024-12-12 10:40:40.401507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.597 qpair failed and we were unable to recover it. 00:27:06.597 [2024-12-12 10:40:40.401630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.597 [2024-12-12 10:40:40.401664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.597 qpair failed and we were unable to recover it. 00:27:06.598 [2024-12-12 10:40:40.401787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.598 [2024-12-12 10:40:40.401820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.598 qpair failed and we were unable to recover it. 00:27:06.598 [2024-12-12 10:40:40.401931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.598 [2024-12-12 10:40:40.401972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.598 qpair failed and we were unable to recover it. 
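errno = 111 is ECONNREFUSED on Linux: nothing was accepting TCP connections on 10.0.0.2:4420 when the initiator's connect() ran, which is why every qpair attempt above fails the same way. A minimal sketch of how one might confirm that from the test host (assuming ss and nc are installed there; the address and port are taken from the error lines above):

  # Is anything listening on the NVMe/TCP port the initiator keeps dialing?
  ss -ltn 'sport = :4420'

  # One-shot connect probe: with no listener this fails with ECONNREFUSED,
  # the same errno = 111 reported by posix_sock_create above.
  nc -z -w 1 10.0.0.2 4420 || echo "connect to 10.0.0.2:4420 refused or timed out"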
[... the same error pair for tqpair=0x1c1b1a0 repeats 9 more times (10:40:40.402095 through 10:40:40.403664) ...]
00:27:06.598 [2024-12-12 10:40:40.403716] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:06.598 [2024-12-12 10:40:40.403750] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:06.598 [2024-12-12 10:40:40.403762] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:06.598 [2024-12-12 10:40:40.403768] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:06.598 [2024-12-12 10:40:40.403773] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:06.598 [2024-12-12 10:40:40.403779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.598 [2024-12-12 10:40:40.403811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:06.598 qpair failed and we were unable to recover it.
[... the same error pair for tqpair=0x1c1b1a0 repeats 8 more times (10:40:40.404003 through 10:40:40.405252) ...]
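The app_setup_trace NOTICE lines above spell out the trace-capture workflow. As a sketch of that session (assuming the nvmf target is still running as shm instance 0, exactly as the notices state; the destination path in the copy is an example):

  # Capture a snapshot of trace events from the running nvmf app
  # (shm name "nvmf", instance id 0, per the NOTICE above).
  spdk_trace -s nvmf -i 0

  # Or preserve the raw shared-memory trace file for offline
  # analysis/debug, as the last NOTICE suggests.
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0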
00:27:06.598 [2024-12-12 10:40:40.405366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.598 [2024-12-12 10:40:40.405399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:06.598 [2024-12-12 10:40:40.405304] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5
00:27:06.598 qpair failed and we were unable to recover it.
00:27:06.598 [2024-12-12 10:40:40.405413] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6
00:27:06.598 [2024-12-12 10:40:40.405523] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4
00:27:06.598 [2024-12-12 10:40:40.405522] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7
00:27:06.598 [2024-12-12 10:40:40.405645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.598 [2024-12-12 10:40:40.405696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.598 qpair failed and we were unable to recover it.
00:27:06.598 [2024-12-12 10:40:40.405834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.598 [2024-12-12 10:40:40.405881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420
00:27:06.598 qpair failed and we were unable to recover it.
[... the same error pair for tqpair=0x7fb82c000b90 repeats 6 more times (10:40:40.406068 through 10:40:40.407085) ...]
00:27:06.598 [2024-12-12 10:40:40.407190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.598 [2024-12-12 10:40:40.407224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.598 qpair failed and we were unable to recover it. 00:27:06.598 [2024-12-12 10:40:40.407407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.598 [2024-12-12 10:40:40.407439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.598 qpair failed and we were unable to recover it. 00:27:06.598 [2024-12-12 10:40:40.407642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.598 [2024-12-12 10:40:40.407676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.598 qpair failed and we were unable to recover it. 00:27:06.598 [2024-12-12 10:40:40.407869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.598 [2024-12-12 10:40:40.407901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.598 qpair failed and we were unable to recover it. 00:27:06.598 [2024-12-12 10:40:40.408076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.598 [2024-12-12 10:40:40.408111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.598 qpair failed and we were unable to recover it. 00:27:06.598 [2024-12-12 10:40:40.408225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.598 [2024-12-12 10:40:40.408260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.598 qpair failed and we were unable to recover it. 00:27:06.598 [2024-12-12 10:40:40.408451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.598 [2024-12-12 10:40:40.408494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.598 qpair failed and we were unable to recover it. 00:27:06.598 [2024-12-12 10:40:40.408613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.599 [2024-12-12 10:40:40.408648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.599 qpair failed and we were unable to recover it. 00:27:06.599 [2024-12-12 10:40:40.408766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.599 [2024-12-12 10:40:40.408799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.599 qpair failed and we were unable to recover it. 00:27:06.599 [2024-12-12 10:40:40.408986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.599 [2024-12-12 10:40:40.409020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.599 qpair failed and we were unable to recover it. 
00:27:06.599 [2024-12-12 10:40:40.409147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.599 [2024-12-12 10:40:40.409180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.599 qpair failed and we were unable to recover it. 00:27:06.599 [2024-12-12 10:40:40.409369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.599 [2024-12-12 10:40:40.409402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.599 qpair failed and we were unable to recover it. 00:27:06.599 [2024-12-12 10:40:40.409588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.599 [2024-12-12 10:40:40.409623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.599 qpair failed and we were unable to recover it. 00:27:06.599 [2024-12-12 10:40:40.409875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.599 [2024-12-12 10:40:40.409910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.599 qpair failed and we were unable to recover it. 00:27:06.599 [2024-12-12 10:40:40.410016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.599 [2024-12-12 10:40:40.410049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.599 qpair failed and we were unable to recover it. 00:27:06.599 [2024-12-12 10:40:40.410194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.599 [2024-12-12 10:40:40.410228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.599 qpair failed and we were unable to recover it. 00:27:06.599 [2024-12-12 10:40:40.410343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.599 [2024-12-12 10:40:40.410387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.599 qpair failed and we were unable to recover it. 00:27:06.599 [2024-12-12 10:40:40.410507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.599 [2024-12-12 10:40:40.410541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.599 qpair failed and we were unable to recover it. 00:27:06.599 [2024-12-12 10:40:40.410811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.599 [2024-12-12 10:40:40.410847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.599 qpair failed and we were unable to recover it. 00:27:06.599 [2024-12-12 10:40:40.411114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.599 [2024-12-12 10:40:40.411147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.599 qpair failed and we were unable to recover it. 
00:27:06.599 [2024-12-12 10:40:40.411397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.599 [2024-12-12 10:40:40.411432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.599 qpair failed and we were unable to recover it. 00:27:06.599 [2024-12-12 10:40:40.411588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.599 [2024-12-12 10:40:40.411627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.599 qpair failed and we were unable to recover it. 00:27:06.599 [2024-12-12 10:40:40.411756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.599 [2024-12-12 10:40:40.411789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.599 qpair failed and we were unable to recover it. 00:27:06.599 [2024-12-12 10:40:40.411926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.599 [2024-12-12 10:40:40.411959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.599 qpair failed and we were unable to recover it. 00:27:06.599 [2024-12-12 10:40:40.412083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.599 [2024-12-12 10:40:40.412117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.599 qpair failed and we were unable to recover it. 00:27:06.599 [2024-12-12 10:40:40.412378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.599 [2024-12-12 10:40:40.412411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.599 qpair failed and we were unable to recover it. 00:27:06.599 [2024-12-12 10:40:40.412524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.599 [2024-12-12 10:40:40.412560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.599 qpair failed and we were unable to recover it. 00:27:06.599 [2024-12-12 10:40:40.412788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.599 [2024-12-12 10:40:40.412825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.599 qpair failed and we were unable to recover it. 00:27:06.599 [2024-12-12 10:40:40.412948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.599 [2024-12-12 10:40:40.412982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.599 qpair failed and we were unable to recover it. 00:27:06.599 [2024-12-12 10:40:40.413101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.599 [2024-12-12 10:40:40.413135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.599 qpair failed and we were unable to recover it. 
00:27:06.599 [2024-12-12 10:40:40.413263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.599 [2024-12-12 10:40:40.413296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.599 qpair failed and we were unable to recover it. 00:27:06.599 [2024-12-12 10:40:40.413427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.599 [2024-12-12 10:40:40.413460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.599 qpair failed and we were unable to recover it. 00:27:06.599 [2024-12-12 10:40:40.413634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.599 [2024-12-12 10:40:40.413670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.599 qpair failed and we were unable to recover it. 00:27:06.599 [2024-12-12 10:40:40.413819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.599 [2024-12-12 10:40:40.413866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.599 qpair failed and we were unable to recover it. 00:27:06.599 [2024-12-12 10:40:40.413990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.599 [2024-12-12 10:40:40.414031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.599 qpair failed and we were unable to recover it. 00:27:06.599 [2024-12-12 10:40:40.414146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.599 [2024-12-12 10:40:40.414185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.599 qpair failed and we were unable to recover it. 00:27:06.599 [2024-12-12 10:40:40.414292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.599 [2024-12-12 10:40:40.414325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.599 qpair failed and we were unable to recover it. 00:27:06.599 [2024-12-12 10:40:40.414428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.599 [2024-12-12 10:40:40.414462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.599 qpair failed and we were unable to recover it. 00:27:06.599 [2024-12-12 10:40:40.414583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.599 [2024-12-12 10:40:40.414619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.599 qpair failed and we were unable to recover it. 00:27:06.599 [2024-12-12 10:40:40.414732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.599 [2024-12-12 10:40:40.414765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.599 qpair failed and we were unable to recover it. 
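The tqpair token alternates among several values here (0x7fb82c000b90, 0x7fb830000b90, 0x1c1b1a0), which suggests more than one qpair object is cycling through connect attempts against the same 10.0.0.2:4420 target. The overall shape of each burst is a bounded retry loop; the sketch below shows only that shape, with a made-up retry budget and back-off, and is not SPDK's actual reconnect path:

    import socket
    import time

    def try_connect(ip: str, port: int) -> bool:
        """One connect attempt; True on success, logs errno on failure."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(2.0)
            try:
                s.connect((ip, port))
                return True
            except OSError as e:
                print(f"connect() failed, errno = {e.errno}")
                return False

    for attempt in range(10):      # retry budget is an assumption
        if try_connect("10.0.0.2", 4420):
            print("connected")
            break
        time.sleep(0.1)            # back-off interval is an assumption
    else:
        print("giving up: qpair could not be recovered")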
00:27:06.599 [2024-12-12 10:40:40.414884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.599 [2024-12-12 10:40:40.414917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.599 qpair failed and we were unable to recover it. 00:27:06.599 [2024-12-12 10:40:40.415092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.599 [2024-12-12 10:40:40.415126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.599 qpair failed and we were unable to recover it. 00:27:06.599 [2024-12-12 10:40:40.415315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.599 [2024-12-12 10:40:40.415349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.599 qpair failed and we were unable to recover it. 00:27:06.599 [2024-12-12 10:40:40.415543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.599 [2024-12-12 10:40:40.415608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.599 qpair failed and we were unable to recover it. 00:27:06.599 [2024-12-12 10:40:40.415791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.599 [2024-12-12 10:40:40.415825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.599 qpair failed and we were unable to recover it. 00:27:06.599 [2024-12-12 10:40:40.415950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.599 [2024-12-12 10:40:40.415984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.599 qpair failed and we were unable to recover it. 00:27:06.599 [2024-12-12 10:40:40.416164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.599 [2024-12-12 10:40:40.416198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.600 qpair failed and we were unable to recover it. 00:27:06.600 [2024-12-12 10:40:40.416311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.600 [2024-12-12 10:40:40.416345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.600 qpair failed and we were unable to recover it. 00:27:06.600 [2024-12-12 10:40:40.416457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.600 [2024-12-12 10:40:40.416491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.600 qpair failed and we were unable to recover it. 00:27:06.600 [2024-12-12 10:40:40.416607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.600 [2024-12-12 10:40:40.416642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.600 qpair failed and we were unable to recover it. 
00:27:06.600 [2024-12-12 10:40:40.416851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.600 [2024-12-12 10:40:40.416885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.600 qpair failed and we were unable to recover it. 00:27:06.600 [2024-12-12 10:40:40.416999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.600 [2024-12-12 10:40:40.417033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.600 qpair failed and we were unable to recover it. 00:27:06.600 [2024-12-12 10:40:40.417216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.600 [2024-12-12 10:40:40.417251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.600 qpair failed and we were unable to recover it. 00:27:06.600 [2024-12-12 10:40:40.417362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.600 [2024-12-12 10:40:40.417396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.600 qpair failed and we were unable to recover it. 00:27:06.600 [2024-12-12 10:40:40.417581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.600 [2024-12-12 10:40:40.417616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.600 qpair failed and we were unable to recover it. 00:27:06.600 [2024-12-12 10:40:40.417723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.600 [2024-12-12 10:40:40.417757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.600 qpair failed and we were unable to recover it. 00:27:06.600 [2024-12-12 10:40:40.417878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.600 [2024-12-12 10:40:40.417913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.600 qpair failed and we were unable to recover it. 00:27:06.600 [2024-12-12 10:40:40.418097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.600 [2024-12-12 10:40:40.418131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.600 qpair failed and we were unable to recover it. 00:27:06.600 [2024-12-12 10:40:40.418253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.600 [2024-12-12 10:40:40.418287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.600 qpair failed and we were unable to recover it. 00:27:06.600 [2024-12-12 10:40:40.418477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.600 [2024-12-12 10:40:40.418511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.600 qpair failed and we were unable to recover it. 
00:27:06.600 [2024-12-12 10:40:40.418636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.600 [2024-12-12 10:40:40.418676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.600 qpair failed and we were unable to recover it. 00:27:06.600 [2024-12-12 10:40:40.418855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.600 [2024-12-12 10:40:40.418889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.600 qpair failed and we were unable to recover it. 00:27:06.600 [2024-12-12 10:40:40.419030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.600 [2024-12-12 10:40:40.419064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.600 qpair failed and we were unable to recover it. 00:27:06.600 [2024-12-12 10:40:40.419191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.600 [2024-12-12 10:40:40.419225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.600 qpair failed and we were unable to recover it. 00:27:06.600 [2024-12-12 10:40:40.419419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.600 [2024-12-12 10:40:40.419453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.600 qpair failed and we were unable to recover it. 00:27:06.600 [2024-12-12 10:40:40.419558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.600 [2024-12-12 10:40:40.419600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.600 qpair failed and we were unable to recover it. 00:27:06.600 [2024-12-12 10:40:40.419791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.600 [2024-12-12 10:40:40.419824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.600 qpair failed and we were unable to recover it. 00:27:06.600 [2024-12-12 10:40:40.420012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.600 [2024-12-12 10:40:40.420046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.600 qpair failed and we were unable to recover it. 00:27:06.600 [2024-12-12 10:40:40.420229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.600 [2024-12-12 10:40:40.420263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.600 qpair failed and we were unable to recover it. 00:27:06.600 [2024-12-12 10:40:40.420433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.600 [2024-12-12 10:40:40.420467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.600 qpair failed and we were unable to recover it. 
00:27:06.600 [2024-12-12 10:40:40.420591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.600 [2024-12-12 10:40:40.420626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.600 qpair failed and we were unable to recover it. 00:27:06.600 [2024-12-12 10:40:40.420743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.600 [2024-12-12 10:40:40.420777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.600 qpair failed and we were unable to recover it. 00:27:06.600 [2024-12-12 10:40:40.421021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.600 [2024-12-12 10:40:40.421055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.600 qpair failed and we were unable to recover it. 00:27:06.600 [2024-12-12 10:40:40.421172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.600 [2024-12-12 10:40:40.421206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.600 qpair failed and we were unable to recover it. 00:27:06.600 [2024-12-12 10:40:40.421400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.600 [2024-12-12 10:40:40.421434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.600 qpair failed and we were unable to recover it. 00:27:06.600 [2024-12-12 10:40:40.421538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.600 [2024-12-12 10:40:40.421578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.600 qpair failed and we were unable to recover it. 00:27:06.600 [2024-12-12 10:40:40.421693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.600 [2024-12-12 10:40:40.421726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.600 qpair failed and we were unable to recover it. 00:27:06.600 [2024-12-12 10:40:40.421898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.600 [2024-12-12 10:40:40.421933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.600 qpair failed and we were unable to recover it. 00:27:06.600 [2024-12-12 10:40:40.422060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.600 [2024-12-12 10:40:40.422093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.600 qpair failed and we were unable to recover it. 00:27:06.600 [2024-12-12 10:40:40.422273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.600 [2024-12-12 10:40:40.422308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.600 qpair failed and we were unable to recover it. 
00:27:06.600 [2024-12-12 10:40:40.422427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.600 [2024-12-12 10:40:40.422461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.600 qpair failed and we were unable to recover it. 00:27:06.600 [2024-12-12 10:40:40.422661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.600 [2024-12-12 10:40:40.422697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.600 qpair failed and we were unable to recover it. 00:27:06.600 [2024-12-12 10:40:40.422937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.601 [2024-12-12 10:40:40.422971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.601 qpair failed and we were unable to recover it. 00:27:06.601 [2024-12-12 10:40:40.423158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.601 [2024-12-12 10:40:40.423191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.601 qpair failed and we were unable to recover it. 00:27:06.601 [2024-12-12 10:40:40.423362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.601 [2024-12-12 10:40:40.423397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.601 qpair failed and we were unable to recover it. 00:27:06.601 [2024-12-12 10:40:40.423512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.601 [2024-12-12 10:40:40.423546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.601 qpair failed and we were unable to recover it. 00:27:06.601 [2024-12-12 10:40:40.423738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.601 [2024-12-12 10:40:40.423779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.601 qpair failed and we were unable to recover it. 00:27:06.601 [2024-12-12 10:40:40.423976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.601 [2024-12-12 10:40:40.424032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.601 qpair failed and we were unable to recover it. 00:27:06.601 [2024-12-12 10:40:40.424236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.601 [2024-12-12 10:40:40.424271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.601 qpair failed and we were unable to recover it. 00:27:06.601 [2024-12-12 10:40:40.424444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.601 [2024-12-12 10:40:40.424477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.601 qpair failed and we were unable to recover it. 
00:27:06.601 [2024-12-12 10:40:40.424606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.601 [2024-12-12 10:40:40.424644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.601 qpair failed and we were unable to recover it. 00:27:06.601 [2024-12-12 10:40:40.424762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.601 [2024-12-12 10:40:40.424796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.601 qpair failed and we were unable to recover it. 00:27:06.601 [2024-12-12 10:40:40.424905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.601 [2024-12-12 10:40:40.424938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.601 qpair failed and we were unable to recover it. 00:27:06.601 [2024-12-12 10:40:40.425112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.601 [2024-12-12 10:40:40.425148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.601 qpair failed and we were unable to recover it. 00:27:06.601 [2024-12-12 10:40:40.425256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.601 [2024-12-12 10:40:40.425289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.601 qpair failed and we were unable to recover it. 00:27:06.601 [2024-12-12 10:40:40.425408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.601 [2024-12-12 10:40:40.425442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.601 qpair failed and we were unable to recover it. 00:27:06.601 [2024-12-12 10:40:40.425634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.601 [2024-12-12 10:40:40.425671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.601 qpair failed and we were unable to recover it. 00:27:06.601 [2024-12-12 10:40:40.425778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.601 [2024-12-12 10:40:40.425811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.601 qpair failed and we were unable to recover it. 00:27:06.601 [2024-12-12 10:40:40.425996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.601 [2024-12-12 10:40:40.426030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.601 qpair failed and we were unable to recover it. 00:27:06.601 [2024-12-12 10:40:40.426157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.601 [2024-12-12 10:40:40.426191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.601 qpair failed and we were unable to recover it. 
00:27:06.601 [2024-12-12 10:40:40.426316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.601 [2024-12-12 10:40:40.426349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.601 qpair failed and we were unable to recover it. 00:27:06.601 [2024-12-12 10:40:40.426471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.601 [2024-12-12 10:40:40.426505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.601 qpair failed and we were unable to recover it. 00:27:06.601 [2024-12-12 10:40:40.426686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.601 [2024-12-12 10:40:40.426722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.601 qpair failed and we were unable to recover it. 00:27:06.601 [2024-12-12 10:40:40.426916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.601 [2024-12-12 10:40:40.426949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.601 qpair failed and we were unable to recover it. 00:27:06.601 [2024-12-12 10:40:40.427063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.601 [2024-12-12 10:40:40.427098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.601 qpair failed and we were unable to recover it. 00:27:06.601 [2024-12-12 10:40:40.427272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.601 [2024-12-12 10:40:40.427306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.601 qpair failed and we were unable to recover it. 00:27:06.601 [2024-12-12 10:40:40.427544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.601 [2024-12-12 10:40:40.427588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.601 qpair failed and we were unable to recover it. 00:27:06.601 [2024-12-12 10:40:40.427769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.601 [2024-12-12 10:40:40.427804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.601 qpair failed and we were unable to recover it. 00:27:06.601 [2024-12-12 10:40:40.427950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.601 [2024-12-12 10:40:40.427984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.601 qpair failed and we were unable to recover it. 00:27:06.601 [2024-12-12 10:40:40.428163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.601 [2024-12-12 10:40:40.428197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.601 qpair failed and we were unable to recover it. 
00:27:06.601 [2024-12-12 10:40:40.428390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.601 [2024-12-12 10:40:40.428427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.601 qpair failed and we were unable to recover it. 00:27:06.601 [2024-12-12 10:40:40.428548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.601 [2024-12-12 10:40:40.428595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.601 qpair failed and we were unable to recover it. 00:27:06.601 [2024-12-12 10:40:40.428707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.601 [2024-12-12 10:40:40.428740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.601 qpair failed and we were unable to recover it. 00:27:06.601 [2024-12-12 10:40:40.428855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.601 [2024-12-12 10:40:40.428889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.601 qpair failed and we were unable to recover it. 00:27:06.601 [2024-12-12 10:40:40.429008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.601 [2024-12-12 10:40:40.429051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.601 qpair failed and we were unable to recover it. 00:27:06.601 [2024-12-12 10:40:40.429156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.601 [2024-12-12 10:40:40.429192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.601 qpair failed and we were unable to recover it. 00:27:06.601 [2024-12-12 10:40:40.429311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.601 [2024-12-12 10:40:40.429344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.601 qpair failed and we were unable to recover it. 00:27:06.601 [2024-12-12 10:40:40.429521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.601 [2024-12-12 10:40:40.429556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.601 qpair failed and we were unable to recover it. 00:27:06.601 [2024-12-12 10:40:40.429758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.601 [2024-12-12 10:40:40.429795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.601 qpair failed and we were unable to recover it. 00:27:06.601 [2024-12-12 10:40:40.429985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.601 [2024-12-12 10:40:40.430020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.601 qpair failed and we were unable to recover it. 
00:27:06.601 [2024-12-12 10:40:40.430138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.601 [2024-12-12 10:40:40.430173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.601 qpair failed and we were unable to recover it. 00:27:06.601 [2024-12-12 10:40:40.430275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.601 [2024-12-12 10:40:40.430307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.601 qpair failed and we were unable to recover it. 00:27:06.602 [2024-12-12 10:40:40.430483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.602 [2024-12-12 10:40:40.430518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.602 qpair failed and we were unable to recover it. 00:27:06.602 [2024-12-12 10:40:40.430707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.602 [2024-12-12 10:40:40.430743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.602 qpair failed and we were unable to recover it. 00:27:06.602 [2024-12-12 10:40:40.430862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.602 [2024-12-12 10:40:40.430898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.602 qpair failed and we were unable to recover it. 00:27:06.602 [2024-12-12 10:40:40.431017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.602 [2024-12-12 10:40:40.431051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.602 qpair failed and we were unable to recover it. 00:27:06.602 [2024-12-12 10:40:40.431173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.602 [2024-12-12 10:40:40.431207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.602 qpair failed and we were unable to recover it. 00:27:06.602 [2024-12-12 10:40:40.431328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.602 [2024-12-12 10:40:40.431363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.602 qpair failed and we were unable to recover it. 00:27:06.602 [2024-12-12 10:40:40.431500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.602 [2024-12-12 10:40:40.431536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.602 qpair failed and we were unable to recover it. 00:27:06.602 [2024-12-12 10:40:40.431727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.602 [2024-12-12 10:40:40.431763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.602 qpair failed and we were unable to recover it. 
00:27:06.602 [2024-12-12 10:40:40.431877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.602 [2024-12-12 10:40:40.431911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.602 qpair failed and we were unable to recover it. 00:27:06.602 [2024-12-12 10:40:40.432020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.602 [2024-12-12 10:40:40.432054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.602 qpair failed and we were unable to recover it. 00:27:06.602 [2024-12-12 10:40:40.432239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.602 [2024-12-12 10:40:40.432275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.602 qpair failed and we were unable to recover it. 00:27:06.602 [2024-12-12 10:40:40.432404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.602 [2024-12-12 10:40:40.432439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.602 qpair failed and we were unable to recover it. 00:27:06.602 [2024-12-12 10:40:40.432551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.602 [2024-12-12 10:40:40.432598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.602 qpair failed and we were unable to recover it. 00:27:06.602 [2024-12-12 10:40:40.432780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.602 [2024-12-12 10:40:40.432814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.602 qpair failed and we were unable to recover it. 00:27:06.602 [2024-12-12 10:40:40.432922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.602 [2024-12-12 10:40:40.432957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.602 qpair failed and we were unable to recover it. 00:27:06.602 [2024-12-12 10:40:40.433071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.602 [2024-12-12 10:40:40.433107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.602 qpair failed and we were unable to recover it. 00:27:06.602 [2024-12-12 10:40:40.433216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.602 [2024-12-12 10:40:40.433249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.602 qpair failed and we were unable to recover it. 00:27:06.602 [2024-12-12 10:40:40.433373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.602 [2024-12-12 10:40:40.433407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.602 qpair failed and we were unable to recover it. 
00:27:06.602 [2024-12-12 10:40:40.433514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.602 [2024-12-12 10:40:40.433548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.602 qpair failed and we were unable to recover it. 00:27:06.602 [2024-12-12 10:40:40.433681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.602 [2024-12-12 10:40:40.433718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.602 qpair failed and we were unable to recover it. 00:27:06.602 [2024-12-12 10:40:40.433845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.602 [2024-12-12 10:40:40.433878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.602 qpair failed and we were unable to recover it. 00:27:06.602 [2024-12-12 10:40:40.434058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.602 [2024-12-12 10:40:40.434092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.602 qpair failed and we were unable to recover it. 00:27:06.602 [2024-12-12 10:40:40.434209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.602 [2024-12-12 10:40:40.434243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.602 qpair failed and we were unable to recover it. 00:27:06.602 [2024-12-12 10:40:40.434358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.602 [2024-12-12 10:40:40.434392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.602 qpair failed and we were unable to recover it. 00:27:06.602 [2024-12-12 10:40:40.434537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.602 [2024-12-12 10:40:40.434579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.602 qpair failed and we were unable to recover it. 00:27:06.602 [2024-12-12 10:40:40.434768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.602 [2024-12-12 10:40:40.434802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.602 qpair failed and we were unable to recover it. 00:27:06.602 [2024-12-12 10:40:40.434914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.602 [2024-12-12 10:40:40.434947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.602 qpair failed and we were unable to recover it. 00:27:06.602 [2024-12-12 10:40:40.435131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.602 [2024-12-12 10:40:40.435165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.602 qpair failed and we were unable to recover it. 
00:27:06.602 [2024-12-12 10:40:40.435354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.602 [2024-12-12 10:40:40.435388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.602 qpair failed and we were unable to recover it. 00:27:06.602 [2024-12-12 10:40:40.435514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.602 [2024-12-12 10:40:40.435549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.602 qpair failed and we were unable to recover it. 00:27:06.602 [2024-12-12 10:40:40.435667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.602 [2024-12-12 10:40:40.435702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.602 qpair failed and we were unable to recover it. 00:27:06.602 [2024-12-12 10:40:40.435809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.602 [2024-12-12 10:40:40.435842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.602 qpair failed and we were unable to recover it. 00:27:06.602 [2024-12-12 10:40:40.435948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.602 [2024-12-12 10:40:40.435981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420 00:27:06.602 qpair failed and we were unable to recover it. 00:27:06.602 [2024-12-12 10:40:40.436152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.602 [2024-12-12 10:40:40.436216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.602 qpair failed and we were unable to recover it. 00:27:06.602 [2024-12-12 10:40:40.436354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.602 [2024-12-12 10:40:40.436405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.602 qpair failed and we were unable to recover it. 00:27:06.602 [2024-12-12 10:40:40.436540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.602 [2024-12-12 10:40:40.436586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.602 qpair failed and we were unable to recover it. 00:27:06.602 [2024-12-12 10:40:40.436717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.602 [2024-12-12 10:40:40.436758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.602 qpair failed and we were unable to recover it. 00:27:06.602 [2024-12-12 10:40:40.436970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.602 [2024-12-12 10:40:40.437009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.602 qpair failed and we were unable to recover it. 
00:27:06.602 [2024-12-12 10:40:40.437127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.602 [2024-12-12 10:40:40.437162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.602 qpair failed and we were unable to recover it. 00:27:06.602 [2024-12-12 10:40:40.437371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.602 [2024-12-12 10:40:40.437405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.603 qpair failed and we were unable to recover it. 00:27:06.603 [2024-12-12 10:40:40.437514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.603 [2024-12-12 10:40:40.437548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.603 qpair failed and we were unable to recover it. 00:27:06.603 [2024-12-12 10:40:40.437819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.603 [2024-12-12 10:40:40.437854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.603 qpair failed and we were unable to recover it. 00:27:06.603 [2024-12-12 10:40:40.437972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.603 [2024-12-12 10:40:40.438006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.603 qpair failed and we were unable to recover it. 00:27:06.603 [2024-12-12 10:40:40.438148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.603 [2024-12-12 10:40:40.438182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.603 qpair failed and we were unable to recover it. 00:27:06.603 [2024-12-12 10:40:40.438292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.603 [2024-12-12 10:40:40.438325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.603 qpair failed and we were unable to recover it. 00:27:06.603 [2024-12-12 10:40:40.438457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.603 [2024-12-12 10:40:40.438491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.603 qpair failed and we were unable to recover it. 00:27:06.603 [2024-12-12 10:40:40.438730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.603 [2024-12-12 10:40:40.438782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.603 qpair failed and we were unable to recover it. 00:27:06.603 [2024-12-12 10:40:40.438964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.603 [2024-12-12 10:40:40.438998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.603 qpair failed and we were unable to recover it. 
00:27:06.603 [2024-12-12 10:40:40.439125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.603 [2024-12-12 10:40:40.439158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.603 qpair failed and we were unable to recover it. 00:27:06.603 [2024-12-12 10:40:40.439275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.603 [2024-12-12 10:40:40.439309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.603 qpair failed and we were unable to recover it. 00:27:06.603 [2024-12-12 10:40:40.439485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.603 [2024-12-12 10:40:40.439518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.603 qpair failed and we were unable to recover it. 00:27:06.603 [2024-12-12 10:40:40.439638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.603 [2024-12-12 10:40:40.439673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.603 qpair failed and we were unable to recover it. 00:27:06.603 [2024-12-12 10:40:40.439796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.603 [2024-12-12 10:40:40.439830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.603 qpair failed and we were unable to recover it. 00:27:06.603 [2024-12-12 10:40:40.439941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.603 [2024-12-12 10:40:40.439973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.603 qpair failed and we were unable to recover it. 00:27:06.603 [2024-12-12 10:40:40.440108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.603 [2024-12-12 10:40:40.440141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.603 qpair failed and we were unable to recover it. 00:27:06.603 [2024-12-12 10:40:40.440247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.603 [2024-12-12 10:40:40.440279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.603 qpair failed and we were unable to recover it. 00:27:06.603 [2024-12-12 10:40:40.440458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.603 [2024-12-12 10:40:40.440490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.603 qpair failed and we were unable to recover it. 00:27:06.603 [2024-12-12 10:40:40.440660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.603 [2024-12-12 10:40:40.440694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.603 qpair failed and we were unable to recover it. 
00:27:06.603 [2024-12-12 10:40:40.440795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.603 [2024-12-12 10:40:40.440829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.603 qpair failed and we were unable to recover it.
00:27:06.603 [2024-12-12 10:40:40.441001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.603 [2024-12-12 10:40:40.441034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.603 qpair failed and we were unable to recover it.
00:27:06.603 [2024-12-12 10:40:40.441216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.603 [2024-12-12 10:40:40.441250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.603 qpair failed and we were unable to recover it.
00:27:06.603 [2024-12-12 10:40:40.441369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.603 [2024-12-12 10:40:40.441403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.603 qpair failed and we were unable to recover it.
00:27:06.603 [2024-12-12 10:40:40.441510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.603 [2024-12-12 10:40:40.441544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.603 qpair failed and we were unable to recover it.
00:27:06.603 [2024-12-12 10:40:40.441746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.603 [2024-12-12 10:40:40.441780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.603 qpair failed and we were unable to recover it.
00:27:06.603 [2024-12-12 10:40:40.441898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.603 [2024-12-12 10:40:40.441931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.603 qpair failed and we were unable to recover it.
00:27:06.603 [2024-12-12 10:40:40.442111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.603 [2024-12-12 10:40:40.442145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.603 qpair failed and we were unable to recover it.
00:27:06.603 [2024-12-12 10:40:40.442271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.603 [2024-12-12 10:40:40.442304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.603 qpair failed and we were unable to recover it.
00:27:06.603 [2024-12-12 10:40:40.442417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.603 [2024-12-12 10:40:40.442449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.603 qpair failed and we were unable to recover it.
00:27:06.603 [2024-12-12 10:40:40.442643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.603 [2024-12-12 10:40:40.442679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.603 qpair failed and we were unable to recover it.
00:27:06.603 [2024-12-12 10:40:40.442794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.603 [2024-12-12 10:40:40.442828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.603 qpair failed and we were unable to recover it.
00:27:06.603 [2024-12-12 10:40:40.443013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.603 [2024-12-12 10:40:40.443046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.603 qpair failed and we were unable to recover it.
00:27:06.603 [2024-12-12 10:40:40.443227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.603 [2024-12-12 10:40:40.443262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.603 qpair failed and we were unable to recover it.
00:27:06.603 [2024-12-12 10:40:40.443463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.603 [2024-12-12 10:40:40.443497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.603 qpair failed and we were unable to recover it.
00:27:06.603 [2024-12-12 10:40:40.443647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.603 [2024-12-12 10:40:40.443692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420
00:27:06.603 qpair failed and we were unable to recover it.
00:27:06.603 [2024-12-12 10:40:40.443821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.603 [2024-12-12 10:40:40.443854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420
00:27:06.603 qpair failed and we were unable to recover it.
00:27:06.603 [2024-12-12 10:40:40.444034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.603 [2024-12-12 10:40:40.444067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420
00:27:06.603 qpair failed and we were unable to recover it.
00:27:06.603 [2024-12-12 10:40:40.444201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.603 [2024-12-12 10:40:40.444234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420
00:27:06.603 qpair failed and we were unable to recover it.
00:27:06.603 [2024-12-12 10:40:40.444363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.603 [2024-12-12 10:40:40.444395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420
00:27:06.603 qpair failed and we were unable to recover it.
00:27:06.603 [2024-12-12 10:40:40.444516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.603 [2024-12-12 10:40:40.444554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:06.603 qpair failed and we were unable to recover it.
00:27:06.604 [2024-12-12 10:40:40.444712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.604 [2024-12-12 10:40:40.444749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.604 qpair failed and we were unable to recover it.
00:27:06.604 [2024-12-12 10:40:40.444883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.604 [2024-12-12 10:40:40.444915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.604 qpair failed and we were unable to recover it.
00:27:06.604 [2024-12-12 10:40:40.445115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.604 [2024-12-12 10:40:40.445149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.604 qpair failed and we were unable to recover it.
00:27:06.604 [2024-12-12 10:40:40.445262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.604 [2024-12-12 10:40:40.445295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.604 qpair failed and we were unable to recover it.
00:27:06.604 [2024-12-12 10:40:40.445408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.604 [2024-12-12 10:40:40.445442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.604 qpair failed and we were unable to recover it.
00:27:06.604 [2024-12-12 10:40:40.445623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.604 [2024-12-12 10:40:40.445658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.604 qpair failed and we were unable to recover it.
00:27:06.604 [2024-12-12 10:40:40.445761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.604 [2024-12-12 10:40:40.445794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.604 qpair failed and we were unable to recover it.
00:27:06.604 [2024-12-12 10:40:40.445908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.604 [2024-12-12 10:40:40.445941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.604 qpair failed and we were unable to recover it.
00:27:06.604 [2024-12-12 10:40:40.446074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.604 [2024-12-12 10:40:40.446108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.604 qpair failed and we were unable to recover it.
00:27:06.604 [2024-12-12 10:40:40.446218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.604 [2024-12-12 10:40:40.446258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.604 qpair failed and we were unable to recover it.
00:27:06.604 [2024-12-12 10:40:40.446394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.604 [2024-12-12 10:40:40.446426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.604 qpair failed and we were unable to recover it.
00:27:06.604 [2024-12-12 10:40:40.446604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.604 [2024-12-12 10:40:40.446640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.604 qpair failed and we were unable to recover it.
00:27:06.604 [2024-12-12 10:40:40.446813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.604 [2024-12-12 10:40:40.446846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.604 qpair failed and we were unable to recover it.
00:27:06.604 [2024-12-12 10:40:40.447025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.604 [2024-12-12 10:40:40.447059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.604 qpair failed and we were unable to recover it.
00:27:06.604 [2024-12-12 10:40:40.447183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.604 [2024-12-12 10:40:40.447218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.604 qpair failed and we were unable to recover it.
00:27:06.604 [2024-12-12 10:40:40.447335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.604 [2024-12-12 10:40:40.447368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.604 qpair failed and we were unable to recover it.
00:27:06.604 [2024-12-12 10:40:40.447477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.604 [2024-12-12 10:40:40.447513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.604 qpair failed and we were unable to recover it.
00:27:06.604 [2024-12-12 10:40:40.447638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.604 [2024-12-12 10:40:40.447673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.604 qpair failed and we were unable to recover it.
00:27:06.604 [2024-12-12 10:40:40.447949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.604 [2024-12-12 10:40:40.447983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.604 qpair failed and we were unable to recover it.
00:27:06.604 [2024-12-12 10:40:40.448093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.604 [2024-12-12 10:40:40.448127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.604 qpair failed and we were unable to recover it.
00:27:06.604 [2024-12-12 10:40:40.448366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.604 [2024-12-12 10:40:40.448399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.604 qpair failed and we were unable to recover it.
00:27:06.604 [2024-12-12 10:40:40.448598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.604 [2024-12-12 10:40:40.448634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.604 qpair failed and we were unable to recover it.
00:27:06.604 [2024-12-12 10:40:40.448755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.604 [2024-12-12 10:40:40.448789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.604 qpair failed and we were unable to recover it.
00:27:06.604 [2024-12-12 10:40:40.448923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.604 [2024-12-12 10:40:40.448956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.604 qpair failed and we were unable to recover it.
00:27:06.604 [2024-12-12 10:40:40.449156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.604 [2024-12-12 10:40:40.449190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.604 qpair failed and we were unable to recover it.
00:27:06.604 [2024-12-12 10:40:40.449304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.604 [2024-12-12 10:40:40.449337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.604 qpair failed and we were unable to recover it.
00:27:06.604 [2024-12-12 10:40:40.449522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.604 [2024-12-12 10:40:40.449555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.604 qpair failed and we were unable to recover it.
00:27:06.604 [2024-12-12 10:40:40.449690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.604 [2024-12-12 10:40:40.449724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.604 qpair failed and we were unable to recover it.
00:27:06.604 [2024-12-12 10:40:40.449920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.604 [2024-12-12 10:40:40.449953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.604 qpair failed and we were unable to recover it.
00:27:06.604 [2024-12-12 10:40:40.450177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.604 [2024-12-12 10:40:40.450213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.604 qpair failed and we were unable to recover it.
00:27:06.604 [2024-12-12 10:40:40.450337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.604 [2024-12-12 10:40:40.450372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.604 qpair failed and we were unable to recover it.
00:27:06.604 [2024-12-12 10:40:40.450480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.604 [2024-12-12 10:40:40.450513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.604 qpair failed and we were unable to recover it.
00:27:06.604 [2024-12-12 10:40:40.450653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.604 [2024-12-12 10:40:40.450688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.604 qpair failed and we were unable to recover it.
00:27:06.604 [2024-12-12 10:40:40.450860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.604 [2024-12-12 10:40:40.450894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.604 qpair failed and we were unable to recover it.
00:27:06.604 [2024-12-12 10:40:40.451083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.604 [2024-12-12 10:40:40.451122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.604 qpair failed and we were unable to recover it.
00:27:06.604 [2024-12-12 10:40:40.451304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.604 [2024-12-12 10:40:40.451339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.604 qpair failed and we were unable to recover it.
00:27:06.604 [2024-12-12 10:40:40.451474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.604 [2024-12-12 10:40:40.451507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.604 qpair failed and we were unable to recover it.
00:27:06.604 [2024-12-12 10:40:40.451715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.604 [2024-12-12 10:40:40.451750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.604 qpair failed and we were unable to recover it.
00:27:06.604 [2024-12-12 10:40:40.451940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.604 [2024-12-12 10:40:40.451974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.604 qpair failed and we were unable to recover it.
00:27:06.605 [2024-12-12 10:40:40.452089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.605 [2024-12-12 10:40:40.452123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.605 qpair failed and we were unable to recover it.
00:27:06.605 [2024-12-12 10:40:40.452266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.605 [2024-12-12 10:40:40.452301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.605 qpair failed and we were unable to recover it.
00:27:06.605 [2024-12-12 10:40:40.452474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.605 [2024-12-12 10:40:40.452510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.605 qpair failed and we were unable to recover it.
00:27:06.605 [2024-12-12 10:40:40.452717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.605 [2024-12-12 10:40:40.452752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.605 qpair failed and we were unable to recover it.
00:27:06.605 [2024-12-12 10:40:40.452951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.605 [2024-12-12 10:40:40.452985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.605 qpair failed and we were unable to recover it.
00:27:06.605 [2024-12-12 10:40:40.453180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.605 [2024-12-12 10:40:40.453214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.605 qpair failed and we were unable to recover it.
00:27:06.605 [2024-12-12 10:40:40.453402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.605 [2024-12-12 10:40:40.453436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.605 qpair failed and we were unable to recover it.
00:27:06.605 [2024-12-12 10:40:40.453588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.605 [2024-12-12 10:40:40.453624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.605 qpair failed and we were unable to recover it.
00:27:06.605 [2024-12-12 10:40:40.453741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.605 [2024-12-12 10:40:40.453782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.605 qpair failed and we were unable to recover it.
00:27:06.605 [2024-12-12 10:40:40.453899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.605 [2024-12-12 10:40:40.453933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.605 qpair failed and we were unable to recover it.
00:27:06.605 [2024-12-12 10:40:40.454115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.605 [2024-12-12 10:40:40.454150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.605 qpair failed and we were unable to recover it.
00:27:06.605 [2024-12-12 10:40:40.454422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.605 [2024-12-12 10:40:40.454458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.605 qpair failed and we were unable to recover it.
00:27:06.605 [2024-12-12 10:40:40.454592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.605 [2024-12-12 10:40:40.454627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.605 qpair failed and we were unable to recover it.
00:27:06.605 [2024-12-12 10:40:40.454889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.605 [2024-12-12 10:40:40.454923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.605 qpair failed and we were unable to recover it.
00:27:06.605 [2024-12-12 10:40:40.455132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.605 [2024-12-12 10:40:40.455168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.605 qpair failed and we were unable to recover it.
00:27:06.605 [2024-12-12 10:40:40.455293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.605 [2024-12-12 10:40:40.455327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.605 qpair failed and we were unable to recover it.
00:27:06.605 [2024-12-12 10:40:40.455440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.605 [2024-12-12 10:40:40.455473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.605 qpair failed and we were unable to recover it.
00:27:06.605 [2024-12-12 10:40:40.455587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.605 [2024-12-12 10:40:40.455621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.605 qpair failed and we were unable to recover it.
00:27:06.605 [2024-12-12 10:40:40.455834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.605 [2024-12-12 10:40:40.455869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.605 qpair failed and we were unable to recover it.
00:27:06.605 [2024-12-12 10:40:40.455983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.605 [2024-12-12 10:40:40.456018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.605 qpair failed and we were unable to recover it.
00:27:06.605 [2024-12-12 10:40:40.456287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.605 [2024-12-12 10:40:40.456323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.605 qpair failed and we were unable to recover it.
00:27:06.605 [2024-12-12 10:40:40.456464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.605 [2024-12-12 10:40:40.456497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.605 qpair failed and we were unable to recover it.
00:27:06.605 [2024-12-12 10:40:40.456693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.605 [2024-12-12 10:40:40.456732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.605 qpair failed and we were unable to recover it.
00:27:06.605 [2024-12-12 10:40:40.456908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.605 [2024-12-12 10:40:40.456943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.605 qpair failed and we were unable to recover it.
00:27:06.605 [2024-12-12 10:40:40.457063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.605 [2024-12-12 10:40:40.457098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.605 qpair failed and we were unable to recover it.
00:27:06.605 [2024-12-12 10:40:40.457290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.605 [2024-12-12 10:40:40.457325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.605 qpair failed and we were unable to recover it.
00:27:06.605 [2024-12-12 10:40:40.457468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.605 [2024-12-12 10:40:40.457503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.605 qpair failed and we were unable to recover it.
00:27:06.605 [2024-12-12 10:40:40.457627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.605 [2024-12-12 10:40:40.457662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.605 qpair failed and we were unable to recover it.
00:27:06.605 [2024-12-12 10:40:40.457857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.605 [2024-12-12 10:40:40.457891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.605 qpair failed and we were unable to recover it.
00:27:06.605 [2024-12-12 10:40:40.458098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.605 [2024-12-12 10:40:40.458132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.605 qpair failed and we were unable to recover it.
00:27:06.605 [2024-12-12 10:40:40.458269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.605 [2024-12-12 10:40:40.458302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.605 qpair failed and we were unable to recover it.
00:27:06.605 [2024-12-12 10:40:40.458542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.605 [2024-12-12 10:40:40.458587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.605 qpair failed and we were unable to recover it.
00:27:06.605 [2024-12-12 10:40:40.458734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.605 [2024-12-12 10:40:40.458769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.605 qpair failed and we were unable to recover it.
00:27:06.605 [2024-12-12 10:40:40.458899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.605 [2024-12-12 10:40:40.458932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.605 qpair failed and we were unable to recover it.
00:27:06.605 [2024-12-12 10:40:40.459065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.605 [2024-12-12 10:40:40.459099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.606 qpair failed and we were unable to recover it.
00:27:06.606 [2024-12-12 10:40:40.459375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.606 [2024-12-12 10:40:40.459416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.606 qpair failed and we were unable to recover it.
00:27:06.606 [2024-12-12 10:40:40.459603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.606 [2024-12-12 10:40:40.459639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.606 qpair failed and we were unable to recover it.
00:27:06.606 [2024-12-12 10:40:40.459766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.606 [2024-12-12 10:40:40.459799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.606 qpair failed and we were unable to recover it.
00:27:06.606 [2024-12-12 10:40:40.460043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.606 [2024-12-12 10:40:40.460075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.606 qpair failed and we were unable to recover it.
00:27:06.606 [2024-12-12 10:40:40.460204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.606 [2024-12-12 10:40:40.460237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.606 qpair failed and we were unable to recover it.
00:27:06.606 [2024-12-12 10:40:40.460414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.606 [2024-12-12 10:40:40.460450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.606 qpair failed and we were unable to recover it.
00:27:06.606 [2024-12-12 10:40:40.460639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.606 [2024-12-12 10:40:40.460674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.606 qpair failed and we were unable to recover it.
00:27:06.606 [2024-12-12 10:40:40.460848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.606 [2024-12-12 10:40:40.460886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.606 qpair failed and we were unable to recover it.
00:27:06.606 [2024-12-12 10:40:40.461106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.606 [2024-12-12 10:40:40.461141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.606 qpair failed and we were unable to recover it.
00:27:06.606 [2024-12-12 10:40:40.461264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.606 [2024-12-12 10:40:40.461299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.606 qpair failed and we were unable to recover it.
00:27:06.606 [2024-12-12 10:40:40.461427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.606 [2024-12-12 10:40:40.461460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.606 qpair failed and we were unable to recover it.
00:27:06.606 [2024-12-12 10:40:40.461608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.606 [2024-12-12 10:40:40.461645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.606 qpair failed and we were unable to recover it.
00:27:06.606 [2024-12-12 10:40:40.461750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.606 [2024-12-12 10:40:40.461784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.606 qpair failed and we were unable to recover it.
00:27:06.606 [2024-12-12 10:40:40.462025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.606 [2024-12-12 10:40:40.462061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.606 qpair failed and we were unable to recover it.
00:27:06.606 [2024-12-12 10:40:40.462186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.606 [2024-12-12 10:40:40.462220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.606 qpair failed and we were unable to recover it.
00:27:06.606 [2024-12-12 10:40:40.462411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.606 [2024-12-12 10:40:40.462444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.606 qpair failed and we were unable to recover it.
00:27:06.606 [2024-12-12 10:40:40.462629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.606 [2024-12-12 10:40:40.462665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.606 qpair failed and we were unable to recover it.
00:27:06.606 [2024-12-12 10:40:40.462789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.606 [2024-12-12 10:40:40.462824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.606 qpair failed and we were unable to recover it.
00:27:06.606 [2024-12-12 10:40:40.463008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.606 [2024-12-12 10:40:40.463043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.606 qpair failed and we were unable to recover it.
00:27:06.606 [2024-12-12 10:40:40.463162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.606 [2024-12-12 10:40:40.463196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.606 qpair failed and we were unable to recover it.
00:27:06.606 [2024-12-12 10:40:40.463375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.606 [2024-12-12 10:40:40.463408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.606 qpair failed and we were unable to recover it.
00:27:06.606 [2024-12-12 10:40:40.463528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.606 [2024-12-12 10:40:40.463561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.606 qpair failed and we were unable to recover it.
00:27:06.606 [2024-12-12 10:40:40.463808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.606 [2024-12-12 10:40:40.463842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.606 qpair failed and we were unable to recover it.
00:27:06.606 [2024-12-12 10:40:40.463964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.606 [2024-12-12 10:40:40.463997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.606 qpair failed and we were unable to recover it.
00:27:06.606 [2024-12-12 10:40:40.464191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.606 [2024-12-12 10:40:40.464224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.606 qpair failed and we were unable to recover it.
00:27:06.606 [2024-12-12 10:40:40.464343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.606 [2024-12-12 10:40:40.464376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.606 qpair failed and we were unable to recover it.
00:27:06.606 [2024-12-12 10:40:40.464505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.606 [2024-12-12 10:40:40.464539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.606 qpair failed and we were unable to recover it.
00:27:06.606 [2024-12-12 10:40:40.464675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.606 [2024-12-12 10:40:40.464709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.606 qpair failed and we were unable to recover it.
00:27:06.606 [2024-12-12 10:40:40.464883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.606 [2024-12-12 10:40:40.464916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.606 qpair failed and we were unable to recover it.
00:27:06.606 [2024-12-12 10:40:40.465047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.606 [2024-12-12 10:40:40.465080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.606 qpair failed and we were unable to recover it.
00:27:06.606 [2024-12-12 10:40:40.465265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.606 [2024-12-12 10:40:40.465298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.606 qpair failed and we were unable to recover it.
00:27:06.606 [2024-12-12 10:40:40.465405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.606 [2024-12-12 10:40:40.465438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.606 qpair failed and we were unable to recover it.
00:27:06.606 [2024-12-12 10:40:40.465628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.606 [2024-12-12 10:40:40.465663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.606 qpair failed and we were unable to recover it.
00:27:06.606 [2024-12-12 10:40:40.465835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.606 [2024-12-12 10:40:40.465868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.606 qpair failed and we were unable to recover it.
00:27:06.606 [2024-12-12 10:40:40.465971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.606 [2024-12-12 10:40:40.466004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.606 qpair failed and we were unable to recover it.
00:27:06.606 [2024-12-12 10:40:40.466112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.606 [2024-12-12 10:40:40.466145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.606 qpair failed and we were unable to recover it.
00:27:06.606 [2024-12-12 10:40:40.466354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.606 [2024-12-12 10:40:40.466388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.606 qpair failed and we were unable to recover it.
00:27:06.606 [2024-12-12 10:40:40.466506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.606 [2024-12-12 10:40:40.466539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420
00:27:06.606 qpair failed and we were unable to recover it.
00:27:06.606 [2024-12-12 10:40:40.466695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.606 [2024-12-12 10:40:40.466755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.606 qpair failed and we were unable to recover it.
00:27:06.607 [2024-12-12 10:40:40.467013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.607 [2024-12-12 10:40:40.467047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.607 qpair failed and we were unable to recover it.
00:27:06.607 [2024-12-12 10:40:40.467290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.607 [2024-12-12 10:40:40.467333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.607 qpair failed and we were unable to recover it.
00:27:06.607 [2024-12-12 10:40:40.467455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.607 [2024-12-12 10:40:40.467488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.607 qpair failed and we were unable to recover it.
00:27:06.607 [2024-12-12 10:40:40.467678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.607 [2024-12-12 10:40:40.467714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.607 qpair failed and we were unable to recover it.
00:27:06.607 [2024-12-12 10:40:40.467840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.607 [2024-12-12 10:40:40.467874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.607 qpair failed and we were unable to recover it.
00:27:06.607 [2024-12-12 10:40:40.467998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.607 [2024-12-12 10:40:40.468031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.607 qpair failed and we were unable to recover it.
00:27:06.607 [2024-12-12 10:40:40.468216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.607 [2024-12-12 10:40:40.468249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.607 qpair failed and we were unable to recover it.
00:27:06.607 [2024-12-12 10:40:40.468431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.607 [2024-12-12 10:40:40.468464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.607 qpair failed and we were unable to recover it.
00:27:06.607 [2024-12-12 10:40:40.468644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.607 [2024-12-12 10:40:40.468679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.607 qpair failed and we were unable to recover it.
00:27:06.607 [2024-12-12 10:40:40.468793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.607 [2024-12-12 10:40:40.468826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.607 qpair failed and we were unable to recover it.
00:27:06.607 [2024-12-12 10:40:40.468938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.607 [2024-12-12 10:40:40.468971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.607 qpair failed and we were unable to recover it.
00:27:06.607 [2024-12-12 10:40:40.469080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.607 [2024-12-12 10:40:40.469113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.607 qpair failed and we were unable to recover it.
00:27:06.607 [2024-12-12 10:40:40.469300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.607 [2024-12-12 10:40:40.469334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.607 qpair failed and we were unable to recover it.
00:27:06.607 [2024-12-12 10:40:40.469510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.607 [2024-12-12 10:40:40.469543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.607 qpair failed and we were unable to recover it.
00:27:06.607 [2024-12-12 10:40:40.469680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.607 [2024-12-12 10:40:40.469715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.607 qpair failed and we were unable to recover it.
00:27:06.607 [2024-12-12 10:40:40.469856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.607 [2024-12-12 10:40:40.469889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.607 qpair failed and we were unable to recover it.
00:27:06.607 [2024-12-12 10:40:40.470061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.607 [2024-12-12 10:40:40.470095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.607 qpair failed and we were unable to recover it.
00:27:06.607 [2024-12-12 10:40:40.470270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.607 [2024-12-12 10:40:40.470302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.607 qpair failed and we were unable to recover it.
00:27:06.607 [2024-12-12 10:40:40.470474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.607 [2024-12-12 10:40:40.470507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.607 qpair failed and we were unable to recover it.
00:27:06.607 [2024-12-12 10:40:40.470621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.607 [2024-12-12 10:40:40.470656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.607 qpair failed and we were unable to recover it.
00:27:06.607 [2024-12-12 10:40:40.470831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.607 [2024-12-12 10:40:40.470864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.607 qpair failed and we were unable to recover it.
00:27:06.607 [2024-12-12 10:40:40.470983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.607 [2024-12-12 10:40:40.471016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.607 qpair failed and we were unable to recover it.
00:27:06.607 [2024-12-12 10:40:40.471146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.607 [2024-12-12 10:40:40.471179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.607 qpair failed and we were unable to recover it.
00:27:06.607 [2024-12-12 10:40:40.471298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.607 [2024-12-12 10:40:40.471331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.607 qpair failed and we were unable to recover it.
00:27:06.607 [2024-12-12 10:40:40.471434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.607 [2024-12-12 10:40:40.471467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.607 qpair failed and we were unable to recover it.
00:27:06.607 [2024-12-12 10:40:40.471639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.607 [2024-12-12 10:40:40.471674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.607 qpair failed and we were unable to recover it.
00:27:06.607 [2024-12-12 10:40:40.471794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.607 [2024-12-12 10:40:40.471828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.607 qpair failed and we were unable to recover it.
00:27:06.607 [2024-12-12 10:40:40.471943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.607 [2024-12-12 10:40:40.471977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.607 qpair failed and we were unable to recover it.
00:27:06.607 [2024-12-12 10:40:40.472221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.607 [2024-12-12 10:40:40.472255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.607 qpair failed and we were unable to recover it.
00:27:06.607 [2024-12-12 10:40:40.472444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.607 [2024-12-12 10:40:40.472477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.607 qpair failed and we were unable to recover it.
00:27:06.607 [2024-12-12 10:40:40.472594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.607 [2024-12-12 10:40:40.472627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.607 qpair failed and we were unable to recover it.
00:27:06.607 [2024-12-12 10:40:40.472803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.607 [2024-12-12 10:40:40.472836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.607 qpair failed and we were unable to recover it.
00:27:06.607 [2024-12-12 10:40:40.472951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.607 [2024-12-12 10:40:40.472984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.607 qpair failed and we were unable to recover it.
00:27:06.607 [2024-12-12 10:40:40.473270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.607 [2024-12-12 10:40:40.473303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.607 qpair failed and we were unable to recover it.
00:27:06.607 [2024-12-12 10:40:40.473588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.607 [2024-12-12 10:40:40.473622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.607 qpair failed and we were unable to recover it.
00:27:06.607 [2024-12-12 10:40:40.473752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.607 [2024-12-12 10:40:40.473785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.607 qpair failed and we were unable to recover it.
00:27:06.607 [2024-12-12 10:40:40.473898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.607 [2024-12-12 10:40:40.473931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.607 qpair failed and we were unable to recover it.
00:27:06.607 [2024-12-12 10:40:40.474056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.607 [2024-12-12 10:40:40.474089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.607 qpair failed and we were unable to recover it.
00:27:06.607 [2024-12-12 10:40:40.474201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.607 [2024-12-12 10:40:40.474235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.607 qpair failed and we were unable to recover it.
00:27:06.607 [2024-12-12 10:40:40.474345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.607 [2024-12-12 10:40:40.474378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.607 qpair failed and we were unable to recover it.
00:27:06.608 [2024-12-12 10:40:40.474564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.608 [2024-12-12 10:40:40.474606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.608 qpair failed and we were unable to recover it.
00:27:06.608 [2024-12-12 10:40:40.474741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.608 [2024-12-12 10:40:40.474780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.608 qpair failed and we were unable to recover it.
00:27:06.608 [2024-12-12 10:40:40.474898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.608 [2024-12-12 10:40:40.474933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.608 qpair failed and we were unable to recover it.
00:27:06.608 [2024-12-12 10:40:40.475060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.608 [2024-12-12 10:40:40.475094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.608 qpair failed and we were unable to recover it.
00:27:06.608 [2024-12-12 10:40:40.475267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.608 [2024-12-12 10:40:40.475301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.608 qpair failed and we were unable to recover it.
00:27:06.608 [2024-12-12 10:40:40.475405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.608 [2024-12-12 10:40:40.475439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.608 qpair failed and we were unable to recover it.
00:27:06.608 [2024-12-12 10:40:40.475617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.608 [2024-12-12 10:40:40.475652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.608 qpair failed and we were unable to recover it.
00:27:06.608 [2024-12-12 10:40:40.475778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.608 [2024-12-12 10:40:40.475811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.608 qpair failed and we were unable to recover it.
00:27:06.608 [2024-12-12 10:40:40.476076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.608 [2024-12-12 10:40:40.476108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.608 qpair failed and we were unable to recover it.
00:27:06.608 [2024-12-12 10:40:40.476277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.608 [2024-12-12 10:40:40.476310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.608 qpair failed and we were unable to recover it.
00:27:06.608 [2024-12-12 10:40:40.476429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.608 [2024-12-12 10:40:40.476462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.608 qpair failed and we were unable to recover it.
00:27:06.608 [2024-12-12 10:40:40.476653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.608 [2024-12-12 10:40:40.476688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.608 qpair failed and we were unable to recover it.
00:27:06.608 [2024-12-12 10:40:40.476889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.608 [2024-12-12 10:40:40.476922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.608 qpair failed and we were unable to recover it.
00:27:06.608 [2024-12-12 10:40:40.477115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.608 [2024-12-12 10:40:40.477148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.608 qpair failed and we were unable to recover it.
00:27:06.608 [2024-12-12 10:40:40.477323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.608 [2024-12-12 10:40:40.477356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.608 qpair failed and we were unable to recover it.
00:27:06.608 [2024-12-12 10:40:40.477479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.608 [2024-12-12 10:40:40.477512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.608 qpair failed and we were unable to recover it.
00:27:06.608 [2024-12-12 10:40:40.477643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.608 [2024-12-12 10:40:40.477677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.608 qpair failed and we were unable to recover it.
00:27:06.608 [2024-12-12 10:40:40.477807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.608 [2024-12-12 10:40:40.477840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.608 qpair failed and we were unable to recover it.
00:27:06.608 [2024-12-12 10:40:40.477943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.608 [2024-12-12 10:40:40.477976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.608 qpair failed and we were unable to recover it.
00:27:06.608 [2024-12-12 10:40:40.478101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.608 [2024-12-12 10:40:40.478134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.608 qpair failed and we were unable to recover it.
00:27:06.608 [2024-12-12 10:40:40.478318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.608 [2024-12-12 10:40:40.478351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.608 qpair failed and we were unable to recover it.
00:27:06.608 [2024-12-12 10:40:40.478589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.608 [2024-12-12 10:40:40.478623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.608 qpair failed and we were unable to recover it.
00:27:06.608 [2024-12-12 10:40:40.478863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.608 [2024-12-12 10:40:40.478896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.608 qpair failed and we were unable to recover it.
00:27:06.608 [2024-12-12 10:40:40.479018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.608 [2024-12-12 10:40:40.479051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.608 qpair failed and we were unable to recover it.
00:27:06.608 [2024-12-12 10:40:40.479153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.608 [2024-12-12 10:40:40.479186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.608 qpair failed and we were unable to recover it.
00:27:06.608 [2024-12-12 10:40:40.479383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.608 [2024-12-12 10:40:40.479416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.608 qpair failed and we were unable to recover it.
00:27:06.608 [2024-12-12 10:40:40.479537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.608 [2024-12-12 10:40:40.479593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.608 qpair failed and we were unable to recover it.
00:27:06.608 [2024-12-12 10:40:40.479708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.608 [2024-12-12 10:40:40.479742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.608 qpair failed and we were unable to recover it.
00:27:06.608 [2024-12-12 10:40:40.479994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.608 [2024-12-12 10:40:40.480027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.608 qpair failed and we were unable to recover it.
00:27:06.608 [2024-12-12 10:40:40.480160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.608 [2024-12-12 10:40:40.480193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.608 qpair failed and we were unable to recover it.
00:27:06.608 [2024-12-12 10:40:40.480372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.608 [2024-12-12 10:40:40.480405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.608 qpair failed and we were unable to recover it.
00:27:06.608 [2024-12-12 10:40:40.480592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.608 [2024-12-12 10:40:40.480627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.608 qpair failed and we were unable to recover it.
00:27:06.608 [2024-12-12 10:40:40.480843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.608 [2024-12-12 10:40:40.480877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.608 qpair failed and we were unable to recover it.
00:27:06.608 [2024-12-12 10:40:40.480995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.608 [2024-12-12 10:40:40.481027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.608 qpair failed and we were unable to recover it.
00:27:06.608 [2024-12-12 10:40:40.481269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.608 [2024-12-12 10:40:40.481301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.608 qpair failed and we were unable to recover it.
00:27:06.608 [2024-12-12 10:40:40.481433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.608 [2024-12-12 10:40:40.481467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.608 qpair failed and we were unable to recover it.
00:27:06.608 [2024-12-12 10:40:40.481584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.608 [2024-12-12 10:40:40.481618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.608 qpair failed and we were unable to recover it. 00:27:06.608 [2024-12-12 10:40:40.481734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.608 [2024-12-12 10:40:40.481767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.608 qpair failed and we were unable to recover it. 00:27:06.609 [2024-12-12 10:40:40.481886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.609 [2024-12-12 10:40:40.481920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.609 qpair failed and we were unable to recover it. 00:27:06.609 [2024-12-12 10:40:40.482039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.609 [2024-12-12 10:40:40.482072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.609 qpair failed and we were unable to recover it. 00:27:06.609 [2024-12-12 10:40:40.482196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.609 [2024-12-12 10:40:40.482228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.609 qpair failed and we were unable to recover it. 00:27:06.609 [2024-12-12 10:40:40.482353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.609 [2024-12-12 10:40:40.482398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.609 qpair failed and we were unable to recover it. 00:27:06.609 [2024-12-12 10:40:40.482518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.609 [2024-12-12 10:40:40.482552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.609 qpair failed and we were unable to recover it. 00:27:06.609 [2024-12-12 10:40:40.482758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.609 [2024-12-12 10:40:40.482791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.609 qpair failed and we were unable to recover it. 00:27:06.609 [2024-12-12 10:40:40.482959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.609 [2024-12-12 10:40:40.482992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.609 qpair failed and we were unable to recover it. 00:27:06.609 [2024-12-12 10:40:40.483119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.609 [2024-12-12 10:40:40.483152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.609 qpair failed and we were unable to recover it. 
00:27:06.609 [2024-12-12 10:40:40.483339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.609 [2024-12-12 10:40:40.483372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.609 qpair failed and we were unable to recover it. 00:27:06.609 [2024-12-12 10:40:40.483600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.609 [2024-12-12 10:40:40.483635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.609 qpair failed and we were unable to recover it. 00:27:06.609 [2024-12-12 10:40:40.483739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.609 [2024-12-12 10:40:40.483773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.609 qpair failed and we were unable to recover it. 00:27:06.609 [2024-12-12 10:40:40.483905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.609 [2024-12-12 10:40:40.483937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.609 qpair failed and we were unable to recover it. 00:27:06.609 [2024-12-12 10:40:40.484191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.609 [2024-12-12 10:40:40.484225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.609 qpair failed and we were unable to recover it. 00:27:06.609 [2024-12-12 10:40:40.484341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.609 [2024-12-12 10:40:40.484374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.609 qpair failed and we were unable to recover it. 00:27:06.609 [2024-12-12 10:40:40.484498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.609 [2024-12-12 10:40:40.484531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.609 qpair failed and we were unable to recover it. 00:27:06.609 [2024-12-12 10:40:40.484767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.609 [2024-12-12 10:40:40.484803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.609 qpair failed and we were unable to recover it. 00:27:06.609 [2024-12-12 10:40:40.484976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.609 [2024-12-12 10:40:40.485009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.609 qpair failed and we were unable to recover it. 00:27:06.609 [2024-12-12 10:40:40.485134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.609 [2024-12-12 10:40:40.485168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.609 qpair failed and we were unable to recover it. 
00:27:06.609 [2024-12-12 10:40:40.485346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.609 [2024-12-12 10:40:40.485379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.609 qpair failed and we were unable to recover it. 00:27:06.609 [2024-12-12 10:40:40.485560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.609 [2024-12-12 10:40:40.485606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.609 qpair failed and we were unable to recover it. 00:27:06.609 [2024-12-12 10:40:40.485767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.609 [2024-12-12 10:40:40.485808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.609 qpair failed and we were unable to recover it. 00:27:06.609 [2024-12-12 10:40:40.486079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.609 [2024-12-12 10:40:40.486111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.609 qpair failed and we were unable to recover it. 00:27:06.609 [2024-12-12 10:40:40.486235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.609 [2024-12-12 10:40:40.486268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.609 qpair failed and we were unable to recover it. 00:27:06.609 [2024-12-12 10:40:40.486374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.609 [2024-12-12 10:40:40.486407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.609 qpair failed and we were unable to recover it. 00:27:06.609 [2024-12-12 10:40:40.486629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.609 [2024-12-12 10:40:40.486663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.609 qpair failed and we were unable to recover it. 00:27:06.609 [2024-12-12 10:40:40.486787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.609 [2024-12-12 10:40:40.486820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.609 qpair failed and we were unable to recover it. 00:27:06.609 [2024-12-12 10:40:40.487015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.609 [2024-12-12 10:40:40.487048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.609 qpair failed and we were unable to recover it. 00:27:06.609 [2024-12-12 10:40:40.487293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.609 [2024-12-12 10:40:40.487325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.609 qpair failed and we were unable to recover it. 
00:27:06.609 [2024-12-12 10:40:40.487433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.609 [2024-12-12 10:40:40.487467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.609 qpair failed and we were unable to recover it. 00:27:06.609 [2024-12-12 10:40:40.487652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.609 [2024-12-12 10:40:40.487686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.609 qpair failed and we were unable to recover it. 00:27:06.609 [2024-12-12 10:40:40.487827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.609 [2024-12-12 10:40:40.487861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.609 qpair failed and we were unable to recover it. 00:27:06.609 [2024-12-12 10:40:40.488116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.609 [2024-12-12 10:40:40.488150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.609 qpair failed and we were unable to recover it. 00:27:06.609 [2024-12-12 10:40:40.488331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.609 [2024-12-12 10:40:40.488363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.609 qpair failed and we were unable to recover it. 00:27:06.609 [2024-12-12 10:40:40.488480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.609 [2024-12-12 10:40:40.488514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.609 qpair failed and we were unable to recover it. 00:27:06.609 [2024-12-12 10:40:40.488733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.609 [2024-12-12 10:40:40.488767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.609 qpair failed and we were unable to recover it. 00:27:06.610 [2024-12-12 10:40:40.488900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.610 [2024-12-12 10:40:40.488933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.610 qpair failed and we were unable to recover it. 00:27:06.610 [2024-12-12 10:40:40.489103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.610 [2024-12-12 10:40:40.489137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.610 qpair failed and we were unable to recover it. 00:27:06.610 [2024-12-12 10:40:40.489309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.610 [2024-12-12 10:40:40.489342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.610 qpair failed and we were unable to recover it. 
00:27:06.610 [2024-12-12 10:40:40.489468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.610 [2024-12-12 10:40:40.489501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.610 qpair failed and we were unable to recover it. 00:27:06.610 [2024-12-12 10:40:40.489694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.610 [2024-12-12 10:40:40.489729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.610 qpair failed and we were unable to recover it. 00:27:06.610 [2024-12-12 10:40:40.489851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.610 [2024-12-12 10:40:40.489884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.610 qpair failed and we were unable to recover it. 00:27:06.610 [2024-12-12 10:40:40.490020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.610 [2024-12-12 10:40:40.490053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.610 qpair failed and we were unable to recover it. 00:27:06.610 [2024-12-12 10:40:40.490186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.610 [2024-12-12 10:40:40.490219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.610 qpair failed and we were unable to recover it. 00:27:06.610 [2024-12-12 10:40:40.490406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.610 [2024-12-12 10:40:40.490446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.610 qpair failed and we were unable to recover it. 00:27:06.610 [2024-12-12 10:40:40.490579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.610 [2024-12-12 10:40:40.490614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.610 qpair failed and we were unable to recover it. 00:27:06.610 [2024-12-12 10:40:40.490793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.610 [2024-12-12 10:40:40.490826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.610 qpair failed and we were unable to recover it. 00:27:06.610 [2024-12-12 10:40:40.490958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.610 [2024-12-12 10:40:40.490991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.610 qpair failed and we were unable to recover it. 00:27:06.610 [2024-12-12 10:40:40.491109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.610 [2024-12-12 10:40:40.491142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.610 qpair failed and we were unable to recover it. 
00:27:06.610 [2024-12-12 10:40:40.491313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.610 [2024-12-12 10:40:40.491347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.610 qpair failed and we were unable to recover it. 00:27:06.610 [2024-12-12 10:40:40.491477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.610 [2024-12-12 10:40:40.491510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.610 qpair failed and we were unable to recover it. 00:27:06.610 [2024-12-12 10:40:40.491711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.610 [2024-12-12 10:40:40.491746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.610 qpair failed and we were unable to recover it. 00:27:06.610 [2024-12-12 10:40:40.491882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.610 [2024-12-12 10:40:40.491915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.610 qpair failed and we were unable to recover it. 00:27:06.610 [2024-12-12 10:40:40.492160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.610 [2024-12-12 10:40:40.492193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.610 qpair failed and we were unable to recover it. 00:27:06.610 [2024-12-12 10:40:40.492366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.610 [2024-12-12 10:40:40.492399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.610 qpair failed and we were unable to recover it. 00:27:06.610 [2024-12-12 10:40:40.492588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.610 [2024-12-12 10:40:40.492623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.610 qpair failed and we were unable to recover it. 00:27:06.610 [2024-12-12 10:40:40.492808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.610 [2024-12-12 10:40:40.492841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.610 qpair failed and we were unable to recover it. 00:27:06.610 [2024-12-12 10:40:40.493080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.610 [2024-12-12 10:40:40.493114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.610 qpair failed and we were unable to recover it. 00:27:06.610 [2024-12-12 10:40:40.493316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.610 [2024-12-12 10:40:40.493350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.610 qpair failed and we were unable to recover it. 
00:27:06.610 [2024-12-12 10:40:40.493463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.610 [2024-12-12 10:40:40.493496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.610 qpair failed and we were unable to recover it. 00:27:06.610 [2024-12-12 10:40:40.493695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.610 [2024-12-12 10:40:40.493730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.610 qpair failed and we were unable to recover it. 00:27:06.610 [2024-12-12 10:40:40.493869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.610 [2024-12-12 10:40:40.493903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.610 qpair failed and we were unable to recover it. 00:27:06.610 [2024-12-12 10:40:40.494026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.610 [2024-12-12 10:40:40.494059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.610 qpair failed and we were unable to recover it. 00:27:06.610 [2024-12-12 10:40:40.494179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.610 [2024-12-12 10:40:40.494212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.610 qpair failed and we were unable to recover it. 00:27:06.610 [2024-12-12 10:40:40.494324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.610 [2024-12-12 10:40:40.494358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.610 qpair failed and we were unable to recover it. 00:27:06.610 [2024-12-12 10:40:40.494529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.610 [2024-12-12 10:40:40.494561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.610 qpair failed and we were unable to recover it. 00:27:06.610 [2024-12-12 10:40:40.494762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.610 [2024-12-12 10:40:40.494795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.610 qpair failed and we were unable to recover it. 00:27:06.610 [2024-12-12 10:40:40.494928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.610 [2024-12-12 10:40:40.494961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.610 qpair failed and we were unable to recover it. 00:27:06.610 [2024-12-12 10:40:40.495094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.610 [2024-12-12 10:40:40.495127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.610 qpair failed and we were unable to recover it. 
00:27:06.610 [2024-12-12 10:40:40.495234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.610 [2024-12-12 10:40:40.495267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.610 qpair failed and we were unable to recover it. 00:27:06.610 [2024-12-12 10:40:40.495475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.610 [2024-12-12 10:40:40.495508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.610 qpair failed and we were unable to recover it. 00:27:06.610 [2024-12-12 10:40:40.495697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.610 [2024-12-12 10:40:40.495732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.610 qpair failed and we were unable to recover it. 00:27:06.610 [2024-12-12 10:40:40.495847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.610 [2024-12-12 10:40:40.495880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.610 qpair failed and we were unable to recover it. 00:27:06.610 [2024-12-12 10:40:40.496055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.610 [2024-12-12 10:40:40.496088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.610 qpair failed and we were unable to recover it. 00:27:06.610 [2024-12-12 10:40:40.496194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.610 [2024-12-12 10:40:40.496228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.610 qpair failed and we were unable to recover it. 00:27:06.611 [2024-12-12 10:40:40.496332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.611 [2024-12-12 10:40:40.496365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.611 qpair failed and we were unable to recover it. 00:27:06.611 [2024-12-12 10:40:40.496475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.611 [2024-12-12 10:40:40.496508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.611 qpair failed and we were unable to recover it. 00:27:06.611 [2024-12-12 10:40:40.496644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.611 [2024-12-12 10:40:40.496678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.611 qpair failed and we were unable to recover it. 00:27:06.611 [2024-12-12 10:40:40.496801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.611 [2024-12-12 10:40:40.496834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.611 qpair failed and we were unable to recover it. 
00:27:06.611 [2024-12-12 10:40:40.496945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.611 [2024-12-12 10:40:40.496978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.611 qpair failed and we were unable to recover it. 00:27:06.611 [2024-12-12 10:40:40.497155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.611 [2024-12-12 10:40:40.497188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.611 qpair failed and we were unable to recover it. 00:27:06.611 [2024-12-12 10:40:40.497366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.611 [2024-12-12 10:40:40.497399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.611 qpair failed and we were unable to recover it. 00:27:06.611 [2024-12-12 10:40:40.497580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.611 [2024-12-12 10:40:40.497615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.611 qpair failed and we were unable to recover it. 00:27:06.611 [2024-12-12 10:40:40.497741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.611 [2024-12-12 10:40:40.497774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.611 qpair failed and we were unable to recover it. 00:27:06.611 [2024-12-12 10:40:40.498032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.611 [2024-12-12 10:40:40.498070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.611 qpair failed and we were unable to recover it. 00:27:06.611 [2024-12-12 10:40:40.498192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.611 [2024-12-12 10:40:40.498225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.611 qpair failed and we were unable to recover it. 00:27:06.611 [2024-12-12 10:40:40.498417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.611 [2024-12-12 10:40:40.498451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.611 qpair failed and we were unable to recover it. 00:27:06.611 [2024-12-12 10:40:40.498643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.611 [2024-12-12 10:40:40.498677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.611 qpair failed and we were unable to recover it. 00:27:06.611 [2024-12-12 10:40:40.498861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.611 [2024-12-12 10:40:40.498895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.611 qpair failed and we were unable to recover it. 
00:27:06.611 [2024-12-12 10:40:40.499013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.611 [2024-12-12 10:40:40.499047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.611 qpair failed and we were unable to recover it. 00:27:06.611 [2024-12-12 10:40:40.499218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.611 [2024-12-12 10:40:40.499251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.611 qpair failed and we were unable to recover it. 00:27:06.611 [2024-12-12 10:40:40.499523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.611 [2024-12-12 10:40:40.499557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.611 qpair failed and we were unable to recover it. 00:27:06.611 [2024-12-12 10:40:40.499671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.611 [2024-12-12 10:40:40.499706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.611 qpair failed and we were unable to recover it. 00:27:06.611 [2024-12-12 10:40:40.499889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.611 [2024-12-12 10:40:40.499921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.611 qpair failed and we were unable to recover it. 00:27:06.611 [2024-12-12 10:40:40.500026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.611 [2024-12-12 10:40:40.500059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.611 qpair failed and we were unable to recover it. 00:27:06.611 [2024-12-12 10:40:40.500230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.611 [2024-12-12 10:40:40.500265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.611 qpair failed and we were unable to recover it. 00:27:06.611 [2024-12-12 10:40:40.500400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.611 [2024-12-12 10:40:40.500433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.611 qpair failed and we were unable to recover it. 00:27:06.611 [2024-12-12 10:40:40.500538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.611 [2024-12-12 10:40:40.500578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.611 qpair failed and we were unable to recover it. 00:27:06.611 [2024-12-12 10:40:40.500721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.611 [2024-12-12 10:40:40.500755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.611 qpair failed and we were unable to recover it. 
00:27:06.611 [2024-12-12 10:40:40.500870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.611 [2024-12-12 10:40:40.500904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.611 qpair failed and we were unable to recover it. 00:27:06.611 [2024-12-12 10:40:40.501018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.611 [2024-12-12 10:40:40.501051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.611 qpair failed and we were unable to recover it. 00:27:06.611 [2024-12-12 10:40:40.501158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.611 [2024-12-12 10:40:40.501190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.611 qpair failed and we were unable to recover it. 00:27:06.611 [2024-12-12 10:40:40.501309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.611 [2024-12-12 10:40:40.501342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.611 qpair failed and we were unable to recover it. 00:27:06.611 [2024-12-12 10:40:40.501452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.611 [2024-12-12 10:40:40.501485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.611 qpair failed and we were unable to recover it. 00:27:06.611 [2024-12-12 10:40:40.501605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.611 [2024-12-12 10:40:40.501638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.611 qpair failed and we were unable to recover it. 00:27:06.611 [2024-12-12 10:40:40.501830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.611 [2024-12-12 10:40:40.501863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.611 qpair failed and we were unable to recover it. 00:27:06.611 [2024-12-12 10:40:40.502055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.611 [2024-12-12 10:40:40.502088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.611 qpair failed and we were unable to recover it. 00:27:06.611 [2024-12-12 10:40:40.502204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.611 [2024-12-12 10:40:40.502237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.611 qpair failed and we were unable to recover it. 00:27:06.611 [2024-12-12 10:40:40.502377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.611 [2024-12-12 10:40:40.502410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.611 qpair failed and we were unable to recover it. 
00:27:06.611 [2024-12-12 10:40:40.502567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.611 [2024-12-12 10:40:40.502612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.611 qpair failed and we were unable to recover it. 00:27:06.611 [2024-12-12 10:40:40.502872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.611 [2024-12-12 10:40:40.502906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420 00:27:06.611 qpair failed and we were unable to recover it. 00:27:06.611 [2024-12-12 10:40:40.503065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.611 [2024-12-12 10:40:40.503103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.611 qpair failed and we were unable to recover it. 00:27:06.611 [2024-12-12 10:40:40.503231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.611 [2024-12-12 10:40:40.503264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.611 qpair failed and we were unable to recover it. 00:27:06.611 [2024-12-12 10:40:40.503439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.611 [2024-12-12 10:40:40.503472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.611 qpair failed and we were unable to recover it. 00:27:06.611 [2024-12-12 10:40:40.503643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.612 [2024-12-12 10:40:40.503678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.612 qpair failed and we were unable to recover it. 00:27:06.612 [2024-12-12 10:40:40.503806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.612 [2024-12-12 10:40:40.503839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.612 qpair failed and we were unable to recover it. 00:27:06.612 [2024-12-12 10:40:40.503939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.612 [2024-12-12 10:40:40.503972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.612 qpair failed and we were unable to recover it. 00:27:06.612 [2024-12-12 10:40:40.504143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.612 [2024-12-12 10:40:40.504176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.612 qpair failed and we were unable to recover it. 00:27:06.612 [2024-12-12 10:40:40.504306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.612 [2024-12-12 10:40:40.504339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.612 qpair failed and we were unable to recover it. 
00:27:06.612 [2024-12-12 10:40:40.504458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.612 [2024-12-12 10:40:40.504491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.612 qpair failed and we were unable to recover it. 00:27:06.612 [2024-12-12 10:40:40.504600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.612 [2024-12-12 10:40:40.504634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.612 qpair failed and we were unable to recover it. 00:27:06.612 [2024-12-12 10:40:40.504752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.612 [2024-12-12 10:40:40.504786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.612 qpair failed and we were unable to recover it. 00:27:06.612 [2024-12-12 10:40:40.504957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.612 [2024-12-12 10:40:40.504989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.612 qpair failed and we were unable to recover it. 00:27:06.612 [2024-12-12 10:40:40.505093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.612 [2024-12-12 10:40:40.505125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.612 qpair failed and we were unable to recover it. 00:27:06.612 [2024-12-12 10:40:40.505238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.612 [2024-12-12 10:40:40.505278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.612 qpair failed and we were unable to recover it. 00:27:06.612 [2024-12-12 10:40:40.505403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.612 [2024-12-12 10:40:40.505434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.612 qpair failed and we were unable to recover it. 00:27:06.612 [2024-12-12 10:40:40.505553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.612 [2024-12-12 10:40:40.505597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.612 qpair failed and we were unable to recover it. 00:27:06.612 [2024-12-12 10:40:40.505879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.612 [2024-12-12 10:40:40.505912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.612 qpair failed and we were unable to recover it. 00:27:06.612 [2024-12-12 10:40:40.506040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.612 [2024-12-12 10:40:40.506072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.612 qpair failed and we were unable to recover it. 
00:27:06.612 [2024-12-12 10:40:40.506197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.612 [2024-12-12 10:40:40.506230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.612 qpair failed and we were unable to recover it. 00:27:06.612 [2024-12-12 10:40:40.506407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.612 [2024-12-12 10:40:40.506440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.612 qpair failed and we were unable to recover it. 00:27:06.612 [2024-12-12 10:40:40.506556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.612 [2024-12-12 10:40:40.506598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.612 qpair failed and we were unable to recover it. 00:27:06.612 [2024-12-12 10:40:40.506722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.612 [2024-12-12 10:40:40.506755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.612 qpair failed and we were unable to recover it. 00:27:06.612 [2024-12-12 10:40:40.506942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.612 [2024-12-12 10:40:40.506975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.612 qpair failed and we were unable to recover it. 00:27:06.612 [2024-12-12 10:40:40.507150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.612 [2024-12-12 10:40:40.507182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.612 qpair failed and we were unable to recover it. 00:27:06.612 [2024-12-12 10:40:40.507361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.612 [2024-12-12 10:40:40.507394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.612 qpair failed and we were unable to recover it. 00:27:06.612 [2024-12-12 10:40:40.507588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.612 [2024-12-12 10:40:40.507623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.612 qpair failed and we were unable to recover it. 00:27:06.612 [2024-12-12 10:40:40.507755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.612 [2024-12-12 10:40:40.507787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.612 qpair failed and we were unable to recover it. 00:27:06.612 [2024-12-12 10:40:40.507912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.612 [2024-12-12 10:40:40.507944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.612 qpair failed and we were unable to recover it. 
00:27:06.612 [2024-12-12 10:40:40.508127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.612 [2024-12-12 10:40:40.508161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.612 qpair failed and we were unable to recover it. 00:27:06.612 [2024-12-12 10:40:40.508334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.612 [2024-12-12 10:40:40.508366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.612 qpair failed and we were unable to recover it. 00:27:06.612 [2024-12-12 10:40:40.508479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.612 [2024-12-12 10:40:40.508511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.612 qpair failed and we were unable to recover it. 00:27:06.612 [2024-12-12 10:40:40.508626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.612 [2024-12-12 10:40:40.508659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.612 qpair failed and we were unable to recover it. 00:27:06.612 [2024-12-12 10:40:40.508926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.612 [2024-12-12 10:40:40.508960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.612 qpair failed and we were unable to recover it. 00:27:06.612 [2024-12-12 10:40:40.509088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.612 [2024-12-12 10:40:40.509120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.612 qpair failed and we were unable to recover it. 00:27:06.612 [2024-12-12 10:40:40.509261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.612 [2024-12-12 10:40:40.509298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.612 qpair failed and we were unable to recover it. 00:27:06.612 [2024-12-12 10:40:40.509444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.612 [2024-12-12 10:40:40.509477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.612 qpair failed and we were unable to recover it. 00:27:06.612 [2024-12-12 10:40:40.509654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.612 [2024-12-12 10:40:40.509689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb838000b90 with addr=10.0.0.2, port=4420 00:27:06.612 qpair failed and we were unable to recover it. 
00:27:06.612 10:40:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
[... error triple repeated for tqpair=0x7fb838000b90 at 10:40:40.509928 ...]
00:27:06.612 10:40:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
[... error triple repeated at 10:40:40.510201 and 10:40:40.510406 ...]
00:27:06.612 10:40:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
[... error triple repeated at 10:40:40.510560 ...]
00:27:06.613 10:40:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
[... error triple repeated at 10:40:40.510788 and 10:40:40.511037 ...]
00:27:06.613 10:40:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... error triple repeated at 10:40:40.511247 and 10:40:40.511491 ...]
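Note: errno = 111 in these entries is ECONNREFUSED on Linux (x86), i.e. the TCP connect to 10.0.0.2:4420 is being actively refused. That is consistent with the shell trace above, where the harness has only just finished target startup (timing_exit start_nvmf_tgt) and re-enabled xtrace. A minimal, self-contained C snippet that decodes the value (illustrative only, not SPDK code):

/* Illustrative only: decode the errno value from the log above.
 * On Linux (x86), errno 111 is ECONNREFUSED ("Connection refused"),
 * meaning nothing was accepting connections on 10.0.0.2:4420. */
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Prints: errno 111 -> Connection refused */
    printf("errno %d -> %s\n", ECONNREFUSED, strerror(ECONNREFUSED));
    return 0;
}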
[... identical connect() errno=111 / qpair error triple repeated for tqpair=0x7fb838000b90 from 10:40:40.511656 through 10:40:40.523983 ...]
[... error triple repeated for tqpair=0x7fb838000b90 at 10:40:40.524107, 10:40:40.524250, and 10:40:40.524410 ...]
00:27:06.615 [2024-12-12 10:40:40.524605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.615 [2024-12-12 10:40:40.524658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420
00:27:06.615 qpair failed and we were unable to recover it.
[... the same triple repeated for tqpair=0x7fb82c000b90 from 10:40:40.524784 through 10:40:40.525566 ...]
[... identical connect() errno=111 / qpair error triple repeated for tqpair=0x7fb82c000b90 from 10:40:40.525696 through 10:40:40.530452 ...]
[... error triple repeated for tqpair=0x7fb82c000b90 at 10:40:40.530588, 10:40:40.530752, and 10:40:40.530893 ...]
00:27:06.616 [2024-12-12 10:40:40.531079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.616 [2024-12-12 10:40:40.531131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:06.616 qpair failed and we were unable to recover it.
[... the same triple repeated for tqpair=0x1c1b1a0 from 10:40:40.531257 through 10:40:40.532119 ...]
[... identical connect() errno=111 / qpair error triple repeated for tqpair=0x1c1b1a0 from 10:40:40.532314 through 10:40:40.535269 ...]
00:27:06.616 [2024-12-12 10:40:40.535454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.616 [2024-12-12 10:40:40.535496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420
00:27:06.616 qpair failed and we were unable to recover it.
[... the same triple repeated for tqpair=0x7fb82c000b90 from 10:40:40.535614 through 10:40:40.542171 ...]
00:27:06.617 [2024-12-12 10:40:40.542297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.617 [2024-12-12 10:40:40.542335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.617 qpair failed and we were unable to recover it. 00:27:06.617 [2024-12-12 10:40:40.542463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.617 [2024-12-12 10:40:40.542497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.617 qpair failed and we were unable to recover it. 00:27:06.617 [2024-12-12 10:40:40.542637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.617 [2024-12-12 10:40:40.542681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.617 qpair failed and we were unable to recover it. 00:27:06.617 [2024-12-12 10:40:40.542807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.617 [2024-12-12 10:40:40.542841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.617 qpair failed and we were unable to recover it. 00:27:06.617 [2024-12-12 10:40:40.542949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.617 [2024-12-12 10:40:40.542981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.617 qpair failed and we were unable to recover it. 00:27:06.617 [2024-12-12 10:40:40.543154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.617 [2024-12-12 10:40:40.543187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.617 qpair failed and we were unable to recover it. 00:27:06.617 [2024-12-12 10:40:40.543370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.617 [2024-12-12 10:40:40.543403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.617 qpair failed and we were unable to recover it. 00:27:06.618 [2024-12-12 10:40:40.543519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.618 [2024-12-12 10:40:40.543551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.618 qpair failed and we were unable to recover it. 00:27:06.618 [2024-12-12 10:40:40.543688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.618 [2024-12-12 10:40:40.543728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.618 qpair failed and we were unable to recover it. 00:27:06.618 [2024-12-12 10:40:40.543855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.618 [2024-12-12 10:40:40.543891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.618 qpair failed and we were unable to recover it. 
00:27:06.618 [2024-12-12 10:40:40.544092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.618 [2024-12-12 10:40:40.544126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.618 qpair failed and we were unable to recover it. 00:27:06.618 [2024-12-12 10:40:40.544301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.618 [2024-12-12 10:40:40.544334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.618 qpair failed and we were unable to recover it. 00:27:06.618 [2024-12-12 10:40:40.544438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.618 [2024-12-12 10:40:40.544469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.618 qpair failed and we were unable to recover it. 00:27:06.618 [2024-12-12 10:40:40.544592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.618 [2024-12-12 10:40:40.544627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.618 qpair failed and we were unable to recover it. 00:27:06.618 [2024-12-12 10:40:40.544806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.618 [2024-12-12 10:40:40.544849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.618 qpair failed and we were unable to recover it. 00:27:06.618 [2024-12-12 10:40:40.544996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.618 [2024-12-12 10:40:40.545028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.618 qpair failed and we were unable to recover it. 00:27:06.618 [2024-12-12 10:40:40.545169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.618 [2024-12-12 10:40:40.545203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.618 qpair failed and we were unable to recover it. 00:27:06.618 [2024-12-12 10:40:40.545317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.618 [2024-12-12 10:40:40.545349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.618 qpair failed and we were unable to recover it. 00:27:06.618 [2024-12-12 10:40:40.545536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.618 [2024-12-12 10:40:40.545585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.618 qpair failed and we were unable to recover it. 00:27:06.618 [2024-12-12 10:40:40.545774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.618 [2024-12-12 10:40:40.545807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.618 qpair failed and we were unable to recover it. 
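For context while reading these failures: on Linux, errno = 111 is ECONNREFUSED, i.e. the connect() attempts are being answered with a TCP RST because nothing is accepting on 10.0.0.2:4420 at that moment, which is the expected state partway through a target_disconnect test. The sketch below is not part of the test suite; it is a minimal standalone reproduction of the same failure mode, assuming only a Linux host where that address/port has no listener.

```python
# Minimal sketch: attempt a plain TCP connect to the address the initiator
# above keeps retrying. With no listener on the port, Linux reports
# ECONNREFUSED (errno 111), the same errno posix_sock_create logs.
import errno
import socket

try:
    with socket.create_connection(("10.0.0.2", 4420), timeout=1.0) as sock:
        print("connected:", sock.getpeername())
except OSError as exc:
    # A refused connection surfaces here with exc.errno == 111 on Linux;
    # an unreachable host may instead time out (errno is then None).
    name = errno.errorcode.get(exc.errno, "?") if exc.errno else "timeout"
    print(f"connect() failed, errno = {exc.errno} ({name})")
```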
00:27:06.618 [2024-12-12 10:40:40.545914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.618 [2024-12-12 10:40:40.545959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420
00:27:06.618 qpair failed and we were unable to recover it.
00:27:06.618 10:40:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:06.618 [... errno = 111 retries continue (10:40:40.546155 through 10:40:40.546491) ...]
00:27:06.618 10:40:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:06.618 [... errno = 111 retries continue (10:40:40.546681 through 10:40:40.547032), interleaved with the trace output ...]
00:27:06.618 10:40:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:06.618 10:40:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:06.618 [... errno = 111 retries continue (10:40:40.547175) ...]
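The rpc_cmd trace above is the test helper wrapping SPDK's JSON-RPC client: bdev_malloc_create 64 512 -b Malloc0 asks the running target for a 64 MiB RAM-backed bdev with a 512-byte block size, named Malloc0, which the test then exports over NVMe/TCP. A hedged sketch of an equivalent direct invocation follows, assuming an SPDK checkout as the working directory and a target app listening on the default RPC socket; scripts/rpc.py is the stock client location.

```python
# Sketch only: drive SPDK's stock JSON-RPC client the same way the traced
# rpc_cmd helper does. Assumes cwd is an SPDK checkout and a target app is
# listening on the default RPC socket (/var/tmp/spdk.sock).
import subprocess

subprocess.run(
    ["./scripts/rpc.py", "bdev_malloc_create", "64", "512", "-b", "Malloc0"],
    check=True,  # raise if the RPC fails, mirroring rpc_cmd's error handling
)
```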
00:27:06.618 [2024-12-12 10:40:40.547410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.618 [2024-12-12 10:40:40.547445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420
00:27:06.618 qpair failed and we were unable to recover it.
00:27:06.621 [... the three messages above repeat with advancing timestamps (10:40:40.547556 through 10:40:40.571243); every connect() attempt to 10.0.0.2:4420 fails with errno = 111 and no qpair is recovered ...]
00:27:06.622 [2024-12-12 10:40:40.571354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.622 [2024-12-12 10:40:40.571387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.622 qpair failed and we were unable to recover it. 00:27:06.622 [2024-12-12 10:40:40.571620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.622 [2024-12-12 10:40:40.571654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.622 qpair failed and we were unable to recover it. 00:27:06.622 [2024-12-12 10:40:40.571756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.622 [2024-12-12 10:40:40.571786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.622 qpair failed and we were unable to recover it. 00:27:06.622 [2024-12-12 10:40:40.571899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.622 [2024-12-12 10:40:40.571929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.622 qpair failed and we were unable to recover it. 00:27:06.622 [2024-12-12 10:40:40.572102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.622 [2024-12-12 10:40:40.572132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.622 qpair failed and we were unable to recover it. 00:27:06.622 [2024-12-12 10:40:40.572318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.622 [2024-12-12 10:40:40.572350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.622 qpair failed and we were unable to recover it. 00:27:06.622 [2024-12-12 10:40:40.572541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.622 [2024-12-12 10:40:40.572588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.622 qpair failed and we were unable to recover it. 00:27:06.622 [2024-12-12 10:40:40.572713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.622 [2024-12-12 10:40:40.572744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.622 qpair failed and we were unable to recover it. 00:27:06.622 [2024-12-12 10:40:40.572916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.622 [2024-12-12 10:40:40.572946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.622 qpair failed and we were unable to recover it. 00:27:06.622 [2024-12-12 10:40:40.573053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.622 [2024-12-12 10:40:40.573084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420 00:27:06.622 qpair failed and we were unable to recover it. 
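Annotation: errno 111 is ECONNREFUSED. The initiator's connect() to 10.0.0.2:4420 is rejected because nothing is listening on the target side yet, which is exactly the condition this disconnect test drives. A minimal sketch reproducing the same failure from a shell (assumes a bash built with /dev/tcp support; the address and port mirror the log):

  # Probe the NVMe/TCP listener the initiator is retrying against.
  # With no listener bound to 10.0.0.2:4420, bash's /dev/tcp connect
  # fails with ECONNREFUSED (errno 111), matching posix_sock_create above.
  if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420'; then
      echo "connect to 10.0.0.2:4420 refused (errno 111 / ECONNREFUSED)"
  fi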
00:27:06.622 [2024-12-12 10:40:40.574534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.622 [2024-12-12 10:40:40.574566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb82c000b90 with addr=10.0.0.2, port=4420
00:27:06.622 qpair failed and we were unable to recover it.
00:27:06.622 [2024-12-12 10:40:40.574808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.622 [2024-12-12 10:40:40.574858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:06.622 qpair failed and we were unable to recover it.
00:27:06.885 [2024-12-12 10:40:40.575449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.885 [2024-12-12 10:40:40.575482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:06.885 qpair failed and we were unable to recover it.
00:27:06.885 [2024-12-12 10:40:40.577185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.885 [2024-12-12 10:40:40.577220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:06.885 qpair failed and we were unable to recover it.
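Annotation: the tqpair value switches from 0x7fb82c000b90 to 0x1c1b1a0 above (and later to 0x7fb830000b90), consistent with the host tearing down each failed qpair and allocating a fresh one per reconnect pass rather than reusing the old object. A rough shell sketch of the equivalent bounded retry loop (an illustration only, not the host driver's actual logic):

  # Keep probing until the target's listener comes up or we give up.
  # Each iteration stands in for one "create qpair -> connect -> fail ->
  # destroy qpair" cycle, visible above as a new tqpair pointer.
  for attempt in $(seq 1 50); do
      if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
          echo "listener up after $attempt attempts"
          break
      fi
      sleep 0.1
  done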
00:27:06.885 [2024-12-12 10:40:40.577401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.885 [2024-12-12 10:40:40.577434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:06.885 qpair failed and we were unable to recover it.
00:27:06.885 Malloc0
00:27:06.885 10:40:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:06.885 10:40:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:27:06.885 10:40:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:06.885 [2024-12-12 10:40:40.578962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.885 [2024-12-12 10:40:40.578996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:06.885 qpair failed and we were unable to recover it.
00:27:06.885 [2024-12-12 10:40:40.579118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.885 [2024-12-12 10:40:40.579151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:06.885 qpair failed and we were unable to recover it.
00:27:06.885 10:40:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:06.886 [2024-12-12 10:40:40.581238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.886 [2024-12-12 10:40:40.581270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:06.886 qpair failed and we were unable to recover it.
00:27:06.886 [2024-12-12 10:40:40.581385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.886 [2024-12-12 10:40:40.581419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:06.886 qpair failed and we were unable to recover it.
00:27:06.886 [2024-12-12 10:40:40.584907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.886 [2024-12-12 10:40:40.584939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:06.886 qpair failed and we were unable to recover it.
00:27:06.886 [2024-12-12 10:40:40.585045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.886 [2024-12-12 10:40:40.585050] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:06.886 [2024-12-12 10:40:40.585085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:06.886 qpair failed and we were unable to recover it.
00:27:06.886 [2024-12-12 10:40:40.586694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.886 [2024-12-12 10:40:40.586728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:06.886 qpair failed and we were unable to recover it.
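Annotation: the "*** TCP Transport Init ***" notice from tcp.c is target-side output: the nvmf target is bringing up its TCP transport in response to the rpc_cmd nvmf_create_transport call traced above, while the host keeps retrying. A sketch of the same step issued directly against a running SPDK target (assumes the app's RPC socket is at the default path and scripts/rpc.py from the SPDK tree; the test additionally passes -o, omitted here):

  # Create the TCP transport on the target; this is the call that
  # produces the "*** TCP Transport Init ***" notice in the log.
  ./scripts/rpc.py nvmf_create_transport -t tcp

  # Confirm the transport registered.
  ./scripts/rpc.py nvmf_get_transports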
00:27:06.886 [2024-12-12 10:40:40.586829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.886 [2024-12-12 10:40:40.586862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:06.886 qpair failed and we were unable to recover it.
00:27:06.887 [2024-12-12 10:40:40.590434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.887 [2024-12-12 10:40:40.590468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:06.887 qpair failed and we were unable to recover it.
00:27:06.887 10:40:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:06.887 [2024-12-12 10:40:40.590585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.887 [2024-12-12 10:40:40.590620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:06.887 qpair failed and we were unable to recover it.
00:27:06.887 10:40:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:06.887 10:40:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:06.887 10:40:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:06.887 [2024-12-12 10:40:40.592023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.887 [2024-12-12 10:40:40.592056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:06.887 qpair failed and we were unable to recover it.
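Annotation: rpc_cmd nvmf_create_subsystem creates the NVMe-oF subsystem the initiator will eventually attach to; -a allows any host NQN to connect and -s sets the controller serial number. A sketch of the same call plus a verification step, under the same assumptions as the transport sketch above:

  # Create the subsystem, allowing any host (-a) with a fixed serial (-s).
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001

  # List subsystems to verify cnode1 now exists.
  ./scripts/rpc.py nvmf_get_subsystems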
00:27:06.887 [2024-12-12 10:40:40.592383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.887 [2024-12-12 10:40:40.592417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:06.887 qpair failed and we were unable to recover it.
00:27:06.888 [2024-12-12 10:40:40.596145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.888 [2024-12-12 10:40:40.596178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:06.888 qpair failed and we were unable to recover it.
00:27:06.888 [2024-12-12 10:40:40.597935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.888 [2024-12-12 10:40:40.597969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b1a0 with addr=10.0.0.2, port=4420
00:27:06.888 qpair failed and we were unable to recover it.
00:27:06.888 [2024-12-12 10:40:40.598164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.888 [2024-12-12 10:40:40.598211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.888 qpair failed and we were unable to recover it.
00:27:06.888 [2024-12-12 10:40:40.598492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.888 10:40:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:06.888 [2024-12-12 10:40:40.598525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.888 qpair failed and we were unable to recover it.
00:27:06.888 10:40:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:06.888 10:40:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:06.888 10:40:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:06.888 [2024-12-12 10:40:40.599978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.888 [2024-12-12 10:40:40.600011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.888 qpair failed and we were unable to recover it.
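Annotation: rpc_cmd nvmf_subsystem_add_ns exposes the Malloc0 ramdisk bdev (created earlier, hence the lone "Malloc0" output line) as a namespace of cnode1. The connect() retries can only succeed once a listener is also bound; a sketch of the remaining bring-up, where the add_listener step is an assumption about what the script does next (address and port mirror the log):

  # Attach the Malloc0 bdev as a namespace of cnode1.
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

  # Presumed next step: bind a TCP listener so 10.0.0.2:4420 stops
  # refusing connections and the qpair retries above can complete.
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420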
00:27:06.888 [... connect() to 10.0.0.2:4420 keeps failing with errno = 111 (ECONNREFUSED); the triplet for tqpair=0x7fb830000b90 repeats, 10:40:40.600191 through 10:40:40.606034 ...]
00:27:06.889 [... the connect() failed / sock connection error triplet for tqpair=0x7fb830000b90 continues, 10:40:40.606159 through 10:40:40.607603 ...]
00:27:06.889 10:40:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:06.889 10:40:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:06.889 10:40:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:06.889 10:40:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
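target_disconnect.sh@25 adds a TCP listener for cnode1 on 10.0.0.2:4420, the exact address:port the host has been failing to reach. The equivalent direct RPC call would be roughly (same rpc.py/socket assumptions as above; note the outer -s selects the RPC socket while the trailing -s 4420 is the NVMe/TCP service ID):

    # start accepting NVMe/TCP connections to cnode1 on 10.0.0.2:4420
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once this lands, the target prints the "NVMe/TCP Target Listening" NOTICE seen just below and connect() stops being refused.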
00:27:06.890 [... more of the same triplet for tqpair=0x7fb830000b90, 10:40:40.607712 through 10:40:40.609352 ...]
00:27:06.890 [2024-12-12 10:40:40.609489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.890 [2024-12-12 10:40:40.609522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.890 qpair failed and we were unable to recover it.
00:27:06.890 [2024-12-12 10:40:40.609775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.890 [2024-12-12 10:40:40.609809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb830000b90 with addr=10.0.0.2, port=4420
00:27:06.890 qpair failed and we were unable to recover it.
00:27:06.890 [2024-12-12 10:40:40.610015] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:06.890 10:40:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:06.890 10:40:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:06.890 10:40:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:06.890 10:40:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:06.890 [2024-12-12 10:40:40.615705] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.890 [2024-12-12 10:40:40.615833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.890 [2024-12-12 10:40:40.615892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.890 [2024-12-12 10:40:40.615917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.890 [2024-12-12 10:40:40.615939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:06.890 [2024-12-12 10:40:40.615991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:06.890 qpair failed and we were unable to recover it.
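The failure mode changes at this point. The earlier errno = 111 is ECONNREFUSED (nothing was listening on 10.0.0.2:4420 until the nvmf_tcp_listen NOTICE above). Now the TCP connection succeeds, but the target rejects the I/O qpair CONNECT because controller ID 0x1 is no longer known to it, and the host sees the completion status "sct 1, sc 130". A quick decode of that status (plain bash; the interpretation is a hedged reading of the NVMe-oF spec, not something the log itself prints):

    # sct 1 = command-specific status type; sc 130 in hex:
    printf '0x%02x\n' 130   # -> 0x82, which for the Fabrics CONNECT command
                            # is the "Connect Invalid Parameters" status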
00:27:06.890 10:40:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:06.890 10:40:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1674427
00:27:06.890 [2024-12-12 10:40:40.625676] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.890 [2024-12-12 10:40:40.625762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.890 [2024-12-12 10:40:40.625794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.890 [2024-12-12 10:40:40.625812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.890 [2024-12-12 10:40:40.625827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:06.890 [2024-12-12 10:40:40.625863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:06.890 qpair failed and we were unable to recover it.
00:27:06.890 [... the same seven-line CONNECT failure block repeats at 10:40:40.635636 and 10:40:40.645560 ...]
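The `wait 1674427` at target_disconnect.sh@50 blocks on a process the script backgrounded earlier; the PID is the literal value xtrace expanded. A minimal sketch of the underlying shell pattern, with a hypothetical command name standing in for whatever the script actually started:

    run_disconnect_load &   # hypothetical stand-in for the earlier backgrounded step
    bg_pid=$!               # capture its PID
    wait "$bg_pid"          # shows up in xtrace as 'wait <literal pid>', e.g. 'wait 1674427'
    echo "background step exited with status $?"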
00:27:06.890 [2024-12-12 10:40:40.655601] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.890 [2024-12-12 10:40:40.655659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.890 [2024-12-12 10:40:40.655676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.890 [2024-12-12 10:40:40.655683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.890 [2024-12-12 10:40:40.655690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:06.890 [2024-12-12 10:40:40.655705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:06.890 qpair failed and we were unable to recover it. 00:27:06.890 [2024-12-12 10:40:40.665547] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.890 [2024-12-12 10:40:40.665646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.890 [2024-12-12 10:40:40.665659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.890 [2024-12-12 10:40:40.665667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.890 [2024-12-12 10:40:40.665673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:06.890 [2024-12-12 10:40:40.665688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:06.890 qpair failed and we were unable to recover it. 00:27:06.890 [2024-12-12 10:40:40.675603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.890 [2024-12-12 10:40:40.675659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.891 [2024-12-12 10:40:40.675672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.891 [2024-12-12 10:40:40.675678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.891 [2024-12-12 10:40:40.675685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:06.891 [2024-12-12 10:40:40.675700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:06.891 qpair failed and we were unable to recover it. 
00:27:06.891 [2024-12-12 10:40:40.685719] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.891 [2024-12-12 10:40:40.685797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.891 [2024-12-12 10:40:40.685810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.891 [2024-12-12 10:40:40.685818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.891 [2024-12-12 10:40:40.685824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:06.891 [2024-12-12 10:40:40.685838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:06.891 qpair failed and we were unable to recover it. 00:27:06.891 [2024-12-12 10:40:40.695676] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.891 [2024-12-12 10:40:40.695736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.891 [2024-12-12 10:40:40.695750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.891 [2024-12-12 10:40:40.695757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.891 [2024-12-12 10:40:40.695767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:06.891 [2024-12-12 10:40:40.695782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:06.891 qpair failed and we were unable to recover it. 00:27:06.891 [2024-12-12 10:40:40.705693] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.891 [2024-12-12 10:40:40.705760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.891 [2024-12-12 10:40:40.705775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.891 [2024-12-12 10:40:40.705783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.891 [2024-12-12 10:40:40.705789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:06.891 [2024-12-12 10:40:40.705805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:06.891 qpair failed and we were unable to recover it. 
00:27:06.891 [2024-12-12 10:40:40.715721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.891 [2024-12-12 10:40:40.715777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.891 [2024-12-12 10:40:40.715792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.891 [2024-12-12 10:40:40.715800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.891 [2024-12-12 10:40:40.715809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:06.891 [2024-12-12 10:40:40.715826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:06.891 qpair failed and we were unable to recover it. 00:27:06.891 [2024-12-12 10:40:40.725784] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.891 [2024-12-12 10:40:40.725869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.891 [2024-12-12 10:40:40.725882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.891 [2024-12-12 10:40:40.725890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.891 [2024-12-12 10:40:40.725896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:06.891 [2024-12-12 10:40:40.725911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:06.891 qpair failed and we were unable to recover it. 00:27:06.891 [2024-12-12 10:40:40.735829] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.891 [2024-12-12 10:40:40.735890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.891 [2024-12-12 10:40:40.735903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.891 [2024-12-12 10:40:40.735911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.891 [2024-12-12 10:40:40.735917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:06.891 [2024-12-12 10:40:40.735932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:06.891 qpair failed and we were unable to recover it. 
00:27:06.891 [2024-12-12 10:40:40.745846] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.891 [2024-12-12 10:40:40.745899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.891 [2024-12-12 10:40:40.745913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.891 [2024-12-12 10:40:40.745919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.891 [2024-12-12 10:40:40.745926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:06.891 [2024-12-12 10:40:40.745941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:06.891 qpair failed and we were unable to recover it. 00:27:06.891 [2024-12-12 10:40:40.755943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.891 [2024-12-12 10:40:40.756001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.891 [2024-12-12 10:40:40.756014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.891 [2024-12-12 10:40:40.756022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.891 [2024-12-12 10:40:40.756028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:06.891 [2024-12-12 10:40:40.756043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:06.891 qpair failed and we were unable to recover it. 00:27:06.891 [2024-12-12 10:40:40.765904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.891 [2024-12-12 10:40:40.765980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.891 [2024-12-12 10:40:40.766010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.891 [2024-12-12 10:40:40.766018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.891 [2024-12-12 10:40:40.766024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:06.891 [2024-12-12 10:40:40.766044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:06.891 qpair failed and we were unable to recover it. 
00:27:06.891 [2024-12-12 10:40:40.775916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.891 [2024-12-12 10:40:40.775977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.891 [2024-12-12 10:40:40.775992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.891 [2024-12-12 10:40:40.776000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.891 [2024-12-12 10:40:40.776006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:06.891 [2024-12-12 10:40:40.776021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:06.891 qpair failed and we were unable to recover it. 00:27:06.891 [2024-12-12 10:40:40.785979] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.891 [2024-12-12 10:40:40.786031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.891 [2024-12-12 10:40:40.786049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.891 [2024-12-12 10:40:40.786057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.891 [2024-12-12 10:40:40.786063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:06.891 [2024-12-12 10:40:40.786079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:06.891 qpair failed and we were unable to recover it. 00:27:06.891 [2024-12-12 10:40:40.796031] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.891 [2024-12-12 10:40:40.796080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.891 [2024-12-12 10:40:40.796095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.891 [2024-12-12 10:40:40.796103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.891 [2024-12-12 10:40:40.796109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:06.891 [2024-12-12 10:40:40.796124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:06.891 qpair failed and we were unable to recover it. 
00:27:06.891 [2024-12-12 10:40:40.806010] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.891 [2024-12-12 10:40:40.806065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.891 [2024-12-12 10:40:40.806079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.891 [2024-12-12 10:40:40.806087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.891 [2024-12-12 10:40:40.806093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:06.892 [2024-12-12 10:40:40.806108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:06.892 qpair failed and we were unable to recover it. 00:27:06.892 [2024-12-12 10:40:40.816037] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.892 [2024-12-12 10:40:40.816096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.892 [2024-12-12 10:40:40.816111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.892 [2024-12-12 10:40:40.816119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.892 [2024-12-12 10:40:40.816125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:06.892 [2024-12-12 10:40:40.816140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:06.892 qpair failed and we were unable to recover it. 00:27:06.892 [2024-12-12 10:40:40.826124] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.892 [2024-12-12 10:40:40.826189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.892 [2024-12-12 10:40:40.826203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.892 [2024-12-12 10:40:40.826213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.892 [2024-12-12 10:40:40.826220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:06.892 [2024-12-12 10:40:40.826235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:06.892 qpair failed and we were unable to recover it. 
00:27:06.892 [2024-12-12 10:40:40.836120] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.892 [2024-12-12 10:40:40.836183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.892 [2024-12-12 10:40:40.836197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.892 [2024-12-12 10:40:40.836205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.892 [2024-12-12 10:40:40.836211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:06.892 [2024-12-12 10:40:40.836226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:06.892 qpair failed and we were unable to recover it. 00:27:06.892 [2024-12-12 10:40:40.846146] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.892 [2024-12-12 10:40:40.846203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.892 [2024-12-12 10:40:40.846216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.892 [2024-12-12 10:40:40.846224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.892 [2024-12-12 10:40:40.846230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:06.892 [2024-12-12 10:40:40.846245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:06.892 qpair failed and we were unable to recover it. 00:27:06.892 [2024-12-12 10:40:40.856196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.892 [2024-12-12 10:40:40.856264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.892 [2024-12-12 10:40:40.856278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.892 [2024-12-12 10:40:40.856285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.892 [2024-12-12 10:40:40.856291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:06.892 [2024-12-12 10:40:40.856306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:06.892 qpair failed and we were unable to recover it. 
00:27:06.892 [2024-12-12 10:40:40.866212] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.892 [2024-12-12 10:40:40.866265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.892 [2024-12-12 10:40:40.866278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.892 [2024-12-12 10:40:40.866285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.892 [2024-12-12 10:40:40.866291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:06.892 [2024-12-12 10:40:40.866310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:06.892 qpair failed and we were unable to recover it. 00:27:06.892 [2024-12-12 10:40:40.876242] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.892 [2024-12-12 10:40:40.876297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.892 [2024-12-12 10:40:40.876311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.892 [2024-12-12 10:40:40.876318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.892 [2024-12-12 10:40:40.876324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:06.892 [2024-12-12 10:40:40.876338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:06.892 qpair failed and we were unable to recover it. 00:27:06.892 [2024-12-12 10:40:40.886279] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.892 [2024-12-12 10:40:40.886343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.892 [2024-12-12 10:40:40.886357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.892 [2024-12-12 10:40:40.886364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.892 [2024-12-12 10:40:40.886370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:06.892 [2024-12-12 10:40:40.886385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:06.892 qpair failed and we were unable to recover it. 
00:27:06.892 [2024-12-12 10:40:40.896308] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.892 [2024-12-12 10:40:40.896365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.892 [2024-12-12 10:40:40.896379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.892 [2024-12-12 10:40:40.896386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.892 [2024-12-12 10:40:40.896392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:06.892 [2024-12-12 10:40:40.896407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:06.892 qpair failed and we were unable to recover it. 00:27:07.153 [2024-12-12 10:40:40.906321] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.153 [2024-12-12 10:40:40.906375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.153 [2024-12-12 10:40:40.906388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.153 [2024-12-12 10:40:40.906395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.153 [2024-12-12 10:40:40.906401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:07.153 [2024-12-12 10:40:40.906416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:07.153 qpair failed and we were unable to recover it. 00:27:07.153 [2024-12-12 10:40:40.916355] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.153 [2024-12-12 10:40:40.916409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.153 [2024-12-12 10:40:40.916425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.153 [2024-12-12 10:40:40.916432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.153 [2024-12-12 10:40:40.916438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:07.153 [2024-12-12 10:40:40.916453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:07.153 qpair failed and we were unable to recover it. 
00:27:07.153 [2024-12-12 10:40:40.926394] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.153 [2024-12-12 10:40:40.926452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.153 [2024-12-12 10:40:40.926464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.153 [2024-12-12 10:40:40.926472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.153 [2024-12-12 10:40:40.926478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:07.153 [2024-12-12 10:40:40.926492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:07.153 qpair failed and we were unable to recover it. 00:27:07.153 [2024-12-12 10:40:40.936414] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.153 [2024-12-12 10:40:40.936470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.153 [2024-12-12 10:40:40.936483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.153 [2024-12-12 10:40:40.936490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.153 [2024-12-12 10:40:40.936497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:07.153 [2024-12-12 10:40:40.936511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:07.153 qpair failed and we were unable to recover it. 00:27:07.153 [2024-12-12 10:40:40.946366] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.153 [2024-12-12 10:40:40.946427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.153 [2024-12-12 10:40:40.946441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.153 [2024-12-12 10:40:40.946449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.153 [2024-12-12 10:40:40.946455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:07.153 [2024-12-12 10:40:40.946469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:07.153 qpair failed and we were unable to recover it. 
00:27:07.153 [2024-12-12 10:40:40.956394] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.153 [2024-12-12 10:40:40.956459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.153 [2024-12-12 10:40:40.956472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.153 [2024-12-12 10:40:40.956482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.153 [2024-12-12 10:40:40.956488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:07.153 [2024-12-12 10:40:40.956502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:07.153 qpair failed and we were unable to recover it. 00:27:07.153 [2024-12-12 10:40:40.966489] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.154 [2024-12-12 10:40:40.966547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.154 [2024-12-12 10:40:40.966560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.154 [2024-12-12 10:40:40.966567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.154 [2024-12-12 10:40:40.966578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:07.154 [2024-12-12 10:40:40.966593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:07.154 qpair failed and we were unable to recover it. 00:27:07.154 [2024-12-12 10:40:40.976528] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.154 [2024-12-12 10:40:40.976600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.154 [2024-12-12 10:40:40.976614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.154 [2024-12-12 10:40:40.976621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.154 [2024-12-12 10:40:40.976628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:07.154 [2024-12-12 10:40:40.976643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:07.154 qpair failed and we were unable to recover it. 
00:27:07.154 [2024-12-12 10:40:40.986475] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.154 [2024-12-12 10:40:40.986527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.154 [2024-12-12 10:40:40.986540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.154 [2024-12-12 10:40:40.986547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.154 [2024-12-12 10:40:40.986553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.154 [2024-12-12 10:40:40.986572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.154 qpair failed and we were unable to recover it.
00:27:07.154 [2024-12-12 10:40:40.996559] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.154 [2024-12-12 10:40:40.996616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.154 [2024-12-12 10:40:40.996630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.154 [2024-12-12 10:40:40.996638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.154 [2024-12-12 10:40:40.996645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.154 [2024-12-12 10:40:40.996665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.154 qpair failed and we were unable to recover it.
00:27:07.154 [2024-12-12 10:40:41.006609] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.154 [2024-12-12 10:40:41.006667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.154 [2024-12-12 10:40:41.006680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.154 [2024-12-12 10:40:41.006688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.154 [2024-12-12 10:40:41.006694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.154 [2024-12-12 10:40:41.006710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.154 qpair failed and we were unable to recover it.
00:27:07.154 [2024-12-12 10:40:41.016647] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.154 [2024-12-12 10:40:41.016702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.154 [2024-12-12 10:40:41.016717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.154 [2024-12-12 10:40:41.016725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.154 [2024-12-12 10:40:41.016731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.154 [2024-12-12 10:40:41.016747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.154 qpair failed and we were unable to recover it.
00:27:07.154 [2024-12-12 10:40:41.026665] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.154 [2024-12-12 10:40:41.026736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.154 [2024-12-12 10:40:41.026750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.154 [2024-12-12 10:40:41.026757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.154 [2024-12-12 10:40:41.026763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.154 [2024-12-12 10:40:41.026778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.154 qpair failed and we were unable to recover it.
00:27:07.154 [2024-12-12 10:40:41.036691] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.154 [2024-12-12 10:40:41.036744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.154 [2024-12-12 10:40:41.036757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.154 [2024-12-12 10:40:41.036764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.154 [2024-12-12 10:40:41.036770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.154 [2024-12-12 10:40:41.036785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.154 qpair failed and we were unable to recover it.
00:27:07.154 [2024-12-12 10:40:41.046746] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.154 [2024-12-12 10:40:41.046803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.154 [2024-12-12 10:40:41.046817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.154 [2024-12-12 10:40:41.046823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.154 [2024-12-12 10:40:41.046830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.154 [2024-12-12 10:40:41.046844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.154 qpair failed and we were unable to recover it.
00:27:07.154 [2024-12-12 10:40:41.056759] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.154 [2024-12-12 10:40:41.056818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.154 [2024-12-12 10:40:41.056831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.154 [2024-12-12 10:40:41.056839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.154 [2024-12-12 10:40:41.056845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.154 [2024-12-12 10:40:41.056860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.154 qpair failed and we were unable to recover it.
00:27:07.154 [2024-12-12 10:40:41.066783] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.154 [2024-12-12 10:40:41.066841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.154 [2024-12-12 10:40:41.066854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.154 [2024-12-12 10:40:41.066861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.154 [2024-12-12 10:40:41.066868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.154 [2024-12-12 10:40:41.066883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.154 qpair failed and we were unable to recover it.
00:27:07.154 [2024-12-12 10:40:41.076823] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.154 [2024-12-12 10:40:41.076878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.154 [2024-12-12 10:40:41.076891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.154 [2024-12-12 10:40:41.076899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.154 [2024-12-12 10:40:41.076906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.154 [2024-12-12 10:40:41.076921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.154 qpair failed and we were unable to recover it.
00:27:07.154 [2024-12-12 10:40:41.086835] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.154 [2024-12-12 10:40:41.086902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.154 [2024-12-12 10:40:41.086918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.154 [2024-12-12 10:40:41.086925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.154 [2024-12-12 10:40:41.086931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.154 [2024-12-12 10:40:41.086946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.154 qpair failed and we were unable to recover it.
00:27:07.154 [2024-12-12 10:40:41.096889] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.154 [2024-12-12 10:40:41.096944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.155 [2024-12-12 10:40:41.096957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.155 [2024-12-12 10:40:41.096964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.155 [2024-12-12 10:40:41.096971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.155 [2024-12-12 10:40:41.096986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.155 qpair failed and we were unable to recover it.
00:27:07.155 [2024-12-12 10:40:41.106910] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.155 [2024-12-12 10:40:41.106959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.155 [2024-12-12 10:40:41.106972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.155 [2024-12-12 10:40:41.106979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.155 [2024-12-12 10:40:41.106985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.155 [2024-12-12 10:40:41.106999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.155 qpair failed and we were unable to recover it.
00:27:07.155 [2024-12-12 10:40:41.116926] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.155 [2024-12-12 10:40:41.116980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.155 [2024-12-12 10:40:41.116994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.155 [2024-12-12 10:40:41.117001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.155 [2024-12-12 10:40:41.117007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.155 [2024-12-12 10:40:41.117022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.155 qpair failed and we were unable to recover it.
00:27:07.155 [2024-12-12 10:40:41.126943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.155 [2024-12-12 10:40:41.126999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.155 [2024-12-12 10:40:41.127013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.155 [2024-12-12 10:40:41.127020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.155 [2024-12-12 10:40:41.127029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.155 [2024-12-12 10:40:41.127044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.155 qpair failed and we were unable to recover it.
00:27:07.155 [2024-12-12 10:40:41.136983] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.155 [2024-12-12 10:40:41.137040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.155 [2024-12-12 10:40:41.137053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.155 [2024-12-12 10:40:41.137060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.155 [2024-12-12 10:40:41.137067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.155 [2024-12-12 10:40:41.137082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.155 qpair failed and we were unable to recover it.
00:27:07.155 [2024-12-12 10:40:41.147021] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.155 [2024-12-12 10:40:41.147069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.155 [2024-12-12 10:40:41.147082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.155 [2024-12-12 10:40:41.147090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.155 [2024-12-12 10:40:41.147097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.155 [2024-12-12 10:40:41.147111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.155 qpair failed and we were unable to recover it.
00:27:07.155 [2024-12-12 10:40:41.157060] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.155 [2024-12-12 10:40:41.157125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.155 [2024-12-12 10:40:41.157139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.155 [2024-12-12 10:40:41.157146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.155 [2024-12-12 10:40:41.157152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.155 [2024-12-12 10:40:41.157168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.155 qpair failed and we were unable to recover it.
00:27:07.155 [2024-12-12 10:40:41.167078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.155 [2024-12-12 10:40:41.167137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.155 [2024-12-12 10:40:41.167151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.155 [2024-12-12 10:40:41.167158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.155 [2024-12-12 10:40:41.167164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.155 [2024-12-12 10:40:41.167179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.155 qpair failed and we were unable to recover it.
00:27:07.415 [2024-12-12 10:40:41.177120] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.415 [2024-12-12 10:40:41.177191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.415 [2024-12-12 10:40:41.177204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.415 [2024-12-12 10:40:41.177211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.415 [2024-12-12 10:40:41.177218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.415 [2024-12-12 10:40:41.177232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.415 qpair failed and we were unable to recover it.
00:27:07.415 [2024-12-12 10:40:41.187127] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.415 [2024-12-12 10:40:41.187181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.415 [2024-12-12 10:40:41.187194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.415 [2024-12-12 10:40:41.187201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.415 [2024-12-12 10:40:41.187208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.415 [2024-12-12 10:40:41.187223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.415 qpair failed and we were unable to recover it.
00:27:07.415 [2024-12-12 10:40:41.197166] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.415 [2024-12-12 10:40:41.197224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.415 [2024-12-12 10:40:41.197237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.415 [2024-12-12 10:40:41.197245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.415 [2024-12-12 10:40:41.197252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.415 [2024-12-12 10:40:41.197266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.415 qpair failed and we were unable to recover it.
00:27:07.415 [2024-12-12 10:40:41.207190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.415 [2024-12-12 10:40:41.207245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.415 [2024-12-12 10:40:41.207257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.416 [2024-12-12 10:40:41.207264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.416 [2024-12-12 10:40:41.207271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.416 [2024-12-12 10:40:41.207285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.416 qpair failed and we were unable to recover it.
00:27:07.416 [2024-12-12 10:40:41.217221] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.416 [2024-12-12 10:40:41.217277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.416 [2024-12-12 10:40:41.217295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.416 [2024-12-12 10:40:41.217303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.416 [2024-12-12 10:40:41.217308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.416 [2024-12-12 10:40:41.217323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.416 qpair failed and we were unable to recover it.
00:27:07.416 [2024-12-12 10:40:41.227236] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.416 [2024-12-12 10:40:41.227288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.416 [2024-12-12 10:40:41.227302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.416 [2024-12-12 10:40:41.227309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.416 [2024-12-12 10:40:41.227315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.416 [2024-12-12 10:40:41.227329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.416 qpair failed and we were unable to recover it.
00:27:07.416 [2024-12-12 10:40:41.237275] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.416 [2024-12-12 10:40:41.237327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.416 [2024-12-12 10:40:41.237341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.416 [2024-12-12 10:40:41.237348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.416 [2024-12-12 10:40:41.237354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.416 [2024-12-12 10:40:41.237369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.416 qpair failed and we were unable to recover it.
00:27:07.416 [2024-12-12 10:40:41.247329] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.416 [2024-12-12 10:40:41.247394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.416 [2024-12-12 10:40:41.247407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.416 [2024-12-12 10:40:41.247415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.416 [2024-12-12 10:40:41.247421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.416 [2024-12-12 10:40:41.247436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.416 qpair failed and we were unable to recover it.
00:27:07.416 [2024-12-12 10:40:41.257325] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.416 [2024-12-12 10:40:41.257386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.416 [2024-12-12 10:40:41.257400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.416 [2024-12-12 10:40:41.257407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.416 [2024-12-12 10:40:41.257416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.416 [2024-12-12 10:40:41.257432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.416 qpair failed and we were unable to recover it.
00:27:07.416 [2024-12-12 10:40:41.267340] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.416 [2024-12-12 10:40:41.267396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.416 [2024-12-12 10:40:41.267411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.416 [2024-12-12 10:40:41.267419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.416 [2024-12-12 10:40:41.267425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.416 [2024-12-12 10:40:41.267440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.416 qpair failed and we were unable to recover it.
00:27:07.416 [2024-12-12 10:40:41.277359] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.416 [2024-12-12 10:40:41.277414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.416 [2024-12-12 10:40:41.277428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.416 [2024-12-12 10:40:41.277436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.416 [2024-12-12 10:40:41.277443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.416 [2024-12-12 10:40:41.277458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.416 qpair failed and we were unable to recover it.
00:27:07.416 [2024-12-12 10:40:41.287401] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.416 [2024-12-12 10:40:41.287458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.416 [2024-12-12 10:40:41.287471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.416 [2024-12-12 10:40:41.287478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.416 [2024-12-12 10:40:41.287485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.416 [2024-12-12 10:40:41.287500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.416 qpair failed and we were unable to recover it.
00:27:07.416 [2024-12-12 10:40:41.297432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.416 [2024-12-12 10:40:41.297490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.416 [2024-12-12 10:40:41.297504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.416 [2024-12-12 10:40:41.297512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.416 [2024-12-12 10:40:41.297518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.416 [2024-12-12 10:40:41.297533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.416 qpair failed and we were unable to recover it.
00:27:07.416 [2024-12-12 10:40:41.307464] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.416 [2024-12-12 10:40:41.307520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.416 [2024-12-12 10:40:41.307533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.416 [2024-12-12 10:40:41.307540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.416 [2024-12-12 10:40:41.307546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.416 [2024-12-12 10:40:41.307561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.416 qpair failed and we were unable to recover it.
00:27:07.416 [2024-12-12 10:40:41.317479] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.416 [2024-12-12 10:40:41.317535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.416 [2024-12-12 10:40:41.317549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.416 [2024-12-12 10:40:41.317556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.416 [2024-12-12 10:40:41.317562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.416 [2024-12-12 10:40:41.317581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.416 qpair failed and we were unable to recover it.
00:27:07.416 [2024-12-12 10:40:41.327526] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.416 [2024-12-12 10:40:41.327590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.416 [2024-12-12 10:40:41.327603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.416 [2024-12-12 10:40:41.327610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.416 [2024-12-12 10:40:41.327617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.416 [2024-12-12 10:40:41.327635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.416 qpair failed and we were unable to recover it.
00:27:07.416 [2024-12-12 10:40:41.337556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.416 [2024-12-12 10:40:41.337618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.416 [2024-12-12 10:40:41.337631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.416 [2024-12-12 10:40:41.337638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.416 [2024-12-12 10:40:41.337645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.416 [2024-12-12 10:40:41.337660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.417 qpair failed and we were unable to recover it.
00:27:07.417 [2024-12-12 10:40:41.347580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.417 [2024-12-12 10:40:41.347656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.417 [2024-12-12 10:40:41.347672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.417 [2024-12-12 10:40:41.347679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.417 [2024-12-12 10:40:41.347686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.417 [2024-12-12 10:40:41.347701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.417 qpair failed and we were unable to recover it.
00:27:07.417 [2024-12-12 10:40:41.357605] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.417 [2024-12-12 10:40:41.357657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.417 [2024-12-12 10:40:41.357671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.417 [2024-12-12 10:40:41.357677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.417 [2024-12-12 10:40:41.357683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.417 [2024-12-12 10:40:41.357698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.417 qpair failed and we were unable to recover it.
00:27:07.417 [2024-12-12 10:40:41.367643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.417 [2024-12-12 10:40:41.367703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.417 [2024-12-12 10:40:41.367716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.417 [2024-12-12 10:40:41.367723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.417 [2024-12-12 10:40:41.367729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.417 [2024-12-12 10:40:41.367744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.417 qpair failed and we were unable to recover it.
00:27:07.417 [2024-12-12 10:40:41.377669] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.417 [2024-12-12 10:40:41.377727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.417 [2024-12-12 10:40:41.377740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.417 [2024-12-12 10:40:41.377747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.417 [2024-12-12 10:40:41.377753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.417 [2024-12-12 10:40:41.377767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.417 qpair failed and we were unable to recover it.
00:27:07.417 [2024-12-12 10:40:41.387697] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.417 [2024-12-12 10:40:41.387750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.417 [2024-12-12 10:40:41.387763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.417 [2024-12-12 10:40:41.387773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.417 [2024-12-12 10:40:41.387780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.417 [2024-12-12 10:40:41.387794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.417 qpair failed and we were unable to recover it.
00:27:07.417 [2024-12-12 10:40:41.397674] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.417 [2024-12-12 10:40:41.397727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.417 [2024-12-12 10:40:41.397740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.417 [2024-12-12 10:40:41.397747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.417 [2024-12-12 10:40:41.397754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.417 [2024-12-12 10:40:41.397769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.417 qpair failed and we were unable to recover it.
00:27:07.417 [2024-12-12 10:40:41.407779] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.417 [2024-12-12 10:40:41.407838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.417 [2024-12-12 10:40:41.407850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.417 [2024-12-12 10:40:41.407857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.417 [2024-12-12 10:40:41.407864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.417 [2024-12-12 10:40:41.407878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.417 qpair failed and we were unable to recover it.
00:27:07.417 [2024-12-12 10:40:41.417881] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.417 [2024-12-12 10:40:41.417957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.417 [2024-12-12 10:40:41.417970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.417 [2024-12-12 10:40:41.417977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.417 [2024-12-12 10:40:41.417983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.417 [2024-12-12 10:40:41.417998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.417 qpair failed and we were unable to recover it.
00:27:07.417 [2024-12-12 10:40:41.427895] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.417 [2024-12-12 10:40:41.427953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.417 [2024-12-12 10:40:41.427966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.417 [2024-12-12 10:40:41.427973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.417 [2024-12-12 10:40:41.427980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.417 [2024-12-12 10:40:41.427998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.417 qpair failed and we were unable to recover it.
00:27:07.677 [2024-12-12 10:40:41.437898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.677 [2024-12-12 10:40:41.437952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.677 [2024-12-12 10:40:41.437965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.677 [2024-12-12 10:40:41.437972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.677 [2024-12-12 10:40:41.437978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.677 [2024-12-12 10:40:41.437993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.677 qpair failed and we were unable to recover it.
00:27:07.677 [2024-12-12 10:40:41.447919] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.677 [2024-12-12 10:40:41.447977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.678 [2024-12-12 10:40:41.447990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.678 [2024-12-12 10:40:41.447997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.678 [2024-12-12 10:40:41.448004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.678 [2024-12-12 10:40:41.448019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.678 qpair failed and we were unable to recover it.
00:27:07.678 [2024-12-12 10:40:41.457909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.678 [2024-12-12 10:40:41.457967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.678 [2024-12-12 10:40:41.457980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.678 [2024-12-12 10:40:41.457988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.678 [2024-12-12 10:40:41.457994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.678 [2024-12-12 10:40:41.458008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.678 qpair failed and we were unable to recover it.
00:27:07.678 [2024-12-12 10:40:41.467930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.678 [2024-12-12 10:40:41.467987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.678 [2024-12-12 10:40:41.468000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.678 [2024-12-12 10:40:41.468008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.678 [2024-12-12 10:40:41.468014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.678 [2024-12-12 10:40:41.468028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.678 qpair failed and we were unable to recover it.
00:27:07.678 [2024-12-12 10:40:41.477964] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.678 [2024-12-12 10:40:41.478022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.678 [2024-12-12 10:40:41.478036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.678 [2024-12-12 10:40:41.478043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.678 [2024-12-12 10:40:41.478049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.678 [2024-12-12 10:40:41.478064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.678 qpair failed and we were unable to recover it.
00:27:07.678 [2024-12-12 10:40:41.487994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.678 [2024-12-12 10:40:41.488052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.678 [2024-12-12 10:40:41.488065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.678 [2024-12-12 10:40:41.488072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.678 [2024-12-12 10:40:41.488078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.678 [2024-12-12 10:40:41.488093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.678 qpair failed and we were unable to recover it.
00:27:07.678 [2024-12-12 10:40:41.498022] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.678 [2024-12-12 10:40:41.498081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.678 [2024-12-12 10:40:41.498094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.678 [2024-12-12 10:40:41.498101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.678 [2024-12-12 10:40:41.498108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.678 [2024-12-12 10:40:41.498123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.678 qpair failed and we were unable to recover it.
00:27:07.678 [2024-12-12 10:40:41.508046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.678 [2024-12-12 10:40:41.508098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.678 [2024-12-12 10:40:41.508111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.678 [2024-12-12 10:40:41.508118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.678 [2024-12-12 10:40:41.508125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.678 [2024-12-12 10:40:41.508139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.678 qpair failed and we were unable to recover it.
00:27:07.678 [2024-12-12 10:40:41.518068] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.678 [2024-12-12 10:40:41.518127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.678 [2024-12-12 10:40:41.518143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.678 [2024-12-12 10:40:41.518154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.678 [2024-12-12 10:40:41.518160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.678 [2024-12-12 10:40:41.518177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.678 qpair failed and we were unable to recover it.
00:27:07.678 [2024-12-12 10:40:41.528122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.678 [2024-12-12 10:40:41.528181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.678 [2024-12-12 10:40:41.528194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.678 [2024-12-12 10:40:41.528201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.678 [2024-12-12 10:40:41.528207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.678 [2024-12-12 10:40:41.528222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.678 qpair failed and we were unable to recover it.
00:27:07.678 [2024-12-12 10:40:41.538127] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.678 [2024-12-12 10:40:41.538182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.678 [2024-12-12 10:40:41.538195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.678 [2024-12-12 10:40:41.538202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.678 [2024-12-12 10:40:41.538208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.678 [2024-12-12 10:40:41.538224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.678 qpair failed and we were unable to recover it.
00:27:07.678 [2024-12-12 10:40:41.548175] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.678 [2024-12-12 10:40:41.548232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.678 [2024-12-12 10:40:41.548245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.678 [2024-12-12 10:40:41.548253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.678 [2024-12-12 10:40:41.548260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.678 [2024-12-12 10:40:41.548275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.678 qpair failed and we were unable to recover it.
00:27:07.678 [2024-12-12 10:40:41.558178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.678 [2024-12-12 10:40:41.558228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.678 [2024-12-12 10:40:41.558240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.678 [2024-12-12 10:40:41.558248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.678 [2024-12-12 10:40:41.558253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.678 [2024-12-12 10:40:41.558271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.678 qpair failed and we were unable to recover it.
00:27:07.678 [2024-12-12 10:40:41.568223] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.678 [2024-12-12 10:40:41.568283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.678 [2024-12-12 10:40:41.568296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.678 [2024-12-12 10:40:41.568303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.678 [2024-12-12 10:40:41.568309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.678 [2024-12-12 10:40:41.568324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.678 qpair failed and we were unable to recover it.
00:27:07.678 [2024-12-12 10:40:41.578265] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.678 [2024-12-12 10:40:41.578318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.678 [2024-12-12 10:40:41.578331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.679 [2024-12-12 10:40:41.578338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.679 [2024-12-12 10:40:41.578344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.679 [2024-12-12 10:40:41.578359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.679 qpair failed and we were unable to recover it.
00:27:07.679 [2024-12-12 10:40:41.588270] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.679 [2024-12-12 10:40:41.588329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.679 [2024-12-12 10:40:41.588342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.679 [2024-12-12 10:40:41.588349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.679 [2024-12-12 10:40:41.588355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.679 [2024-12-12 10:40:41.588370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.679 qpair failed and we were unable to recover it.
00:27:07.679 [2024-12-12 10:40:41.598297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.679 [2024-12-12 10:40:41.598394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.679 [2024-12-12 10:40:41.598407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.679 [2024-12-12 10:40:41.598414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.679 [2024-12-12 10:40:41.598420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.679 [2024-12-12 10:40:41.598434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.679 qpair failed and we were unable to recover it.
00:27:07.679 [2024-12-12 10:40:41.608328] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.679 [2024-12-12 10:40:41.608382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.679 [2024-12-12 10:40:41.608395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.679 [2024-12-12 10:40:41.608402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.679 [2024-12-12 10:40:41.608409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.679 [2024-12-12 10:40:41.608423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.679 qpair failed and we were unable to recover it.
00:27:07.679 [2024-12-12 10:40:41.618331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.679 [2024-12-12 10:40:41.618390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.679 [2024-12-12 10:40:41.618404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.679 [2024-12-12 10:40:41.618412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.679 [2024-12-12 10:40:41.618418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.679 [2024-12-12 10:40:41.618433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.679 qpair failed and we were unable to recover it.
00:27:07.679 [2024-12-12 10:40:41.628416] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.679 [2024-12-12 10:40:41.628484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.679 [2024-12-12 10:40:41.628497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.679 [2024-12-12 10:40:41.628504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.679 [2024-12-12 10:40:41.628511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.679 [2024-12-12 10:40:41.628526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.679 qpair failed and we were unable to recover it.
00:27:07.679 [2024-12-12 10:40:41.638466] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.679 [2024-12-12 10:40:41.638520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.679 [2024-12-12 10:40:41.638533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.679 [2024-12-12 10:40:41.638540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.679 [2024-12-12 10:40:41.638547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.679 [2024-12-12 10:40:41.638562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.679 qpair failed and we were unable to recover it.
00:27:07.679 [2024-12-12 10:40:41.648367] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.679 [2024-12-12 10:40:41.648420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.679 [2024-12-12 10:40:41.648436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.679 [2024-12-12 10:40:41.648443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.679 [2024-12-12 10:40:41.648449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.679 [2024-12-12 10:40:41.648463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.679 qpair failed and we were unable to recover it.
00:27:07.679 [2024-12-12 10:40:41.658489] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.679 [2024-12-12 10:40:41.658558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.679 [2024-12-12 10:40:41.658575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.679 [2024-12-12 10:40:41.658583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.679 [2024-12-12 10:40:41.658589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.679 [2024-12-12 10:40:41.658604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.679 qpair failed and we were unable to recover it.
00:27:07.679 [2024-12-12 10:40:41.668488] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.679 [2024-12-12 10:40:41.668544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.679 [2024-12-12 10:40:41.668558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.679 [2024-12-12 10:40:41.668565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.679 [2024-12-12 10:40:41.668576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.679 [2024-12-12 10:40:41.668591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.679 qpair failed and we were unable to recover it.
00:27:07.679 [2024-12-12 10:40:41.678506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.679 [2024-12-12 10:40:41.678559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.679 [2024-12-12 10:40:41.678575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.679 [2024-12-12 10:40:41.678582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.679 [2024-12-12 10:40:41.678588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.679 [2024-12-12 10:40:41.678604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.679 qpair failed and we were unable to recover it.
00:27:07.679 [2024-12-12 10:40:41.688551] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.679 [2024-12-12 10:40:41.688622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.679 [2024-12-12 10:40:41.688635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.679 [2024-12-12 10:40:41.688643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.679 [2024-12-12 10:40:41.688654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.679 [2024-12-12 10:40:41.688668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.679 qpair failed and we were unable to recover it.
00:27:07.679 [2024-12-12 10:40:41.698576] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.679 [2024-12-12 10:40:41.698630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.679 [2024-12-12 10:40:41.698643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.679 [2024-12-12 10:40:41.698650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.679 [2024-12-12 10:40:41.698657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.679 [2024-12-12 10:40:41.698672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.679 qpair failed and we were unable to recover it.
00:27:07.940 [2024-12-12 10:40:41.708602] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.940 [2024-12-12 10:40:41.708659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.940 [2024-12-12 10:40:41.708672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.940 [2024-12-12 10:40:41.708679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.940 [2024-12-12 10:40:41.708685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.940 [2024-12-12 10:40:41.708700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-12-12 10:40:41.718630] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.940 [2024-12-12 10:40:41.718681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.940 [2024-12-12 10:40:41.718696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.940 [2024-12-12 10:40:41.718703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.940 [2024-12-12 10:40:41.718709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.940 [2024-12-12 10:40:41.718724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-12-12 10:40:41.728669] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.940 [2024-12-12 10:40:41.728726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.940 [2024-12-12 10:40:41.728740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.940 [2024-12-12 10:40:41.728747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.940 [2024-12-12 10:40:41.728754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.940 [2024-12-12 10:40:41.728768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-12-12 10:40:41.738703] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.940 [2024-12-12 10:40:41.738761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.940 [2024-12-12 10:40:41.738775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.940 [2024-12-12 10:40:41.738782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.940 [2024-12-12 10:40:41.738788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.940 [2024-12-12 10:40:41.738803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-12-12 10:40:41.748744] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.940 [2024-12-12 10:40:41.748801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.940 [2024-12-12 10:40:41.748816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.940 [2024-12-12 10:40:41.748824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.940 [2024-12-12 10:40:41.748831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.940 [2024-12-12 10:40:41.748846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-12-12 10:40:41.758685] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.940 [2024-12-12 10:40:41.758783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.940 [2024-12-12 10:40:41.758796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.940 [2024-12-12 10:40:41.758804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.940 [2024-12-12 10:40:41.758810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.940 [2024-12-12 10:40:41.758825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-12-12 10:40:41.768768] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.940 [2024-12-12 10:40:41.768830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.940 [2024-12-12 10:40:41.768843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.940 [2024-12-12 10:40:41.768851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.940 [2024-12-12 10:40:41.768857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.940 [2024-12-12 10:40:41.768872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.940 qpair failed and we were unable to recover it.
00:27:07.940 [2024-12-12 10:40:41.778807] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.940 [2024-12-12 10:40:41.778866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.940 [2024-12-12 10:40:41.778882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.940 [2024-12-12 10:40:41.778890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.940 [2024-12-12 10:40:41.778896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.940 [2024-12-12 10:40:41.778911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-12-12 10:40:41.788782] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.941 [2024-12-12 10:40:41.788837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.941 [2024-12-12 10:40:41.788850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.941 [2024-12-12 10:40:41.788858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.941 [2024-12-12 10:40:41.788864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.941 [2024-12-12 10:40:41.788878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-12-12 10:40:41.798861] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.941 [2024-12-12 10:40:41.798931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.941 [2024-12-12 10:40:41.798945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.941 [2024-12-12 10:40:41.798953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.941 [2024-12-12 10:40:41.798960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.941 [2024-12-12 10:40:41.798975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-12-12 10:40:41.808829] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.941 [2024-12-12 10:40:41.808883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.941 [2024-12-12 10:40:41.808896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.941 [2024-12-12 10:40:41.808904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.941 [2024-12-12 10:40:41.808910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.941 [2024-12-12 10:40:41.808924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-12-12 10:40:41.818927] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.941 [2024-12-12 10:40:41.818995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.941 [2024-12-12 10:40:41.819009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.941 [2024-12-12 10:40:41.819017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.941 [2024-12-12 10:40:41.819027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.941 [2024-12-12 10:40:41.819042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-12-12 10:40:41.828953] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.941 [2024-12-12 10:40:41.829030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.941 [2024-12-12 10:40:41.829044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.941 [2024-12-12 10:40:41.829051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.941 [2024-12-12 10:40:41.829057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.941 [2024-12-12 10:40:41.829073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-12-12 10:40:41.838999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.941 [2024-12-12 10:40:41.839054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.941 [2024-12-12 10:40:41.839067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.941 [2024-12-12 10:40:41.839074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.941 [2024-12-12 10:40:41.839080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.941 [2024-12-12 10:40:41.839096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-12-12 10:40:41.848995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.941 [2024-12-12 10:40:41.849052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.941 [2024-12-12 10:40:41.849065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.941 [2024-12-12 10:40:41.849072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.941 [2024-12-12 10:40:41.849078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.941 [2024-12-12 10:40:41.849092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-12-12 10:40:41.858958] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.941 [2024-12-12 10:40:41.859037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.941 [2024-12-12 10:40:41.859051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.941 [2024-12-12 10:40:41.859058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.941 [2024-12-12 10:40:41.859064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.941 [2024-12-12 10:40:41.859078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-12-12 10:40:41.868983] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.941 [2024-12-12 10:40:41.869037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.941 [2024-12-12 10:40:41.869051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.941 [2024-12-12 10:40:41.869058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.941 [2024-12-12 10:40:41.869064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.941 [2024-12-12 10:40:41.869080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-12-12 10:40:41.879052] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.941 [2024-12-12 10:40:41.879104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.941 [2024-12-12 10:40:41.879117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.941 [2024-12-12 10:40:41.879124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.941 [2024-12-12 10:40:41.879131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.941 [2024-12-12 10:40:41.879145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-12-12 10:40:41.889089] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.941 [2024-12-12 10:40:41.889145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.941 [2024-12-12 10:40:41.889157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.941 [2024-12-12 10:40:41.889164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.941 [2024-12-12 10:40:41.889171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.941 [2024-12-12 10:40:41.889185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-12-12 10:40:41.899159] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.941 [2024-12-12 10:40:41.899220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.941 [2024-12-12 10:40:41.899233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.941 [2024-12-12 10:40:41.899240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.941 [2024-12-12 10:40:41.899246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.941 [2024-12-12 10:40:41.899261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-12-12 10:40:41.909190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.941 [2024-12-12 10:40:41.909243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.941 [2024-12-12 10:40:41.909259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.941 [2024-12-12 10:40:41.909266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.941 [2024-12-12 10:40:41.909272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.941 [2024-12-12 10:40:41.909287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.941 qpair failed and we were unable to recover it.
00:27:07.941 [2024-12-12 10:40:41.919180] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.941 [2024-12-12 10:40:41.919233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.941 [2024-12-12 10:40:41.919248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.942 [2024-12-12 10:40:41.919254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.942 [2024-12-12 10:40:41.919261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.942 [2024-12-12 10:40:41.919276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.942 qpair failed and we were unable to recover it.
00:27:07.942 [2024-12-12 10:40:41.929228] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.942 [2024-12-12 10:40:41.929332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.942 [2024-12-12 10:40:41.929346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.942 [2024-12-12 10:40:41.929353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.942 [2024-12-12 10:40:41.929359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.942 [2024-12-12 10:40:41.929374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.942 qpair failed and we were unable to recover it.
00:27:07.942 [2024-12-12 10:40:41.939197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.942 [2024-12-12 10:40:41.939254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.942 [2024-12-12 10:40:41.939271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.942 [2024-12-12 10:40:41.939280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.942 [2024-12-12 10:40:41.939288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.942 [2024-12-12 10:40:41.939305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.942 qpair failed and we were unable to recover it.
00:27:07.942 [2024-12-12 10:40:41.949311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.942 [2024-12-12 10:40:41.949370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.942 [2024-12-12 10:40:41.949384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.942 [2024-12-12 10:40:41.949394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.942 [2024-12-12 10:40:41.949400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.942 [2024-12-12 10:40:41.949415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.942 qpair failed and we were unable to recover it.
00:27:07.942 [2024-12-12 10:40:41.959315] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.942 [2024-12-12 10:40:41.959366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.942 [2024-12-12 10:40:41.959379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.942 [2024-12-12 10:40:41.959386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.942 [2024-12-12 10:40:41.959392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:07.942 [2024-12-12 10:40:41.959408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.942 qpair failed and we were unable to recover it.
00:27:08.203 [2024-12-12 10:40:41.969390] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.203 [2024-12-12 10:40:41.969473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.203 [2024-12-12 10:40:41.969487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.203 [2024-12-12 10:40:41.969494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.203 [2024-12-12 10:40:41.969500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.203 [2024-12-12 10:40:41.969515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.203 qpair failed and we were unable to recover it.
00:27:08.203 [2024-12-12 10:40:41.979368] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.203 [2024-12-12 10:40:41.979424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.203 [2024-12-12 10:40:41.979438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.203 [2024-12-12 10:40:41.979446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.203 [2024-12-12 10:40:41.979452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.203 [2024-12-12 10:40:41.979467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.203 qpair failed and we were unable to recover it.
00:27:08.203 [2024-12-12 10:40:41.989421] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.203 [2024-12-12 10:40:41.989476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.203 [2024-12-12 10:40:41.989489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.203 [2024-12-12 10:40:41.989496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.203 [2024-12-12 10:40:41.989502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.203 [2024-12-12 10:40:41.989520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.203 qpair failed and we were unable to recover it.
00:27:08.203 [2024-12-12 10:40:41.999359] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.203 [2024-12-12 10:40:41.999417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.203 [2024-12-12 10:40:41.999431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.203 [2024-12-12 10:40:41.999440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.203 [2024-12-12 10:40:41.999447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.203 [2024-12-12 10:40:41.999463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.203 qpair failed and we were unable to recover it.
00:27:08.203 [2024-12-12 10:40:42.009490] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.203 [2024-12-12 10:40:42.009595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.203 [2024-12-12 10:40:42.009608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.203 [2024-12-12 10:40:42.009616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.203 [2024-12-12 10:40:42.009622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.203 [2024-12-12 10:40:42.009637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.203 qpair failed and we were unable to recover it.
00:27:08.203 [2024-12-12 10:40:42.019510] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.203 [2024-12-12 10:40:42.019575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.203 [2024-12-12 10:40:42.019591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.203 [2024-12-12 10:40:42.019598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.203 [2024-12-12 10:40:42.019604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.203 [2024-12-12 10:40:42.019619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.203 qpair failed and we were unable to recover it.
00:27:08.203 [2024-12-12 10:40:42.029524] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.203 [2024-12-12 10:40:42.029582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.203 [2024-12-12 10:40:42.029596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.203 [2024-12-12 10:40:42.029604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.203 [2024-12-12 10:40:42.029610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.203 [2024-12-12 10:40:42.029626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.203 qpair failed and we were unable to recover it.
00:27:08.203 [2024-12-12 10:40:42.039550] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.203 [2024-12-12 10:40:42.039607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.203 [2024-12-12 10:40:42.039621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.203 [2024-12-12 10:40:42.039628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.203 [2024-12-12 10:40:42.039634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.203 [2024-12-12 10:40:42.039649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.203 qpair failed and we were unable to recover it.
00:27:08.203 [2024-12-12 10:40:42.049592] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.203 [2024-12-12 10:40:42.049646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.203 [2024-12-12 10:40:42.049659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.203 [2024-12-12 10:40:42.049666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.203 [2024-12-12 10:40:42.049672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.203 [2024-12-12 10:40:42.049687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.203 qpair failed and we were unable to recover it.
00:27:08.203 [2024-12-12 10:40:42.059553] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.203 [2024-12-12 10:40:42.059614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.203 [2024-12-12 10:40:42.059628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.203 [2024-12-12 10:40:42.059635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.203 [2024-12-12 10:40:42.059641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.203 [2024-12-12 10:40:42.059656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.203 qpair failed and we were unable to recover it.
00:27:08.203 [2024-12-12 10:40:42.069618] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.203 [2024-12-12 10:40:42.069675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.203 [2024-12-12 10:40:42.069689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.203 [2024-12-12 10:40:42.069696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.203 [2024-12-12 10:40:42.069702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.203 [2024-12-12 10:40:42.069718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.203 qpair failed and we were unable to recover it.
00:27:08.203 [2024-12-12 10:40:42.079661] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.203 [2024-12-12 10:40:42.079716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.203 [2024-12-12 10:40:42.079729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.203 [2024-12-12 10:40:42.079741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.203 [2024-12-12 10:40:42.079748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.203 [2024-12-12 10:40:42.079763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.203 qpair failed and we were unable to recover it.
00:27:08.203 [2024-12-12 10:40:42.089622] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.203 [2024-12-12 10:40:42.089705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.203 [2024-12-12 10:40:42.089718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.203 [2024-12-12 10:40:42.089726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.203 [2024-12-12 10:40:42.089732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.203 [2024-12-12 10:40:42.089746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.203 qpair failed and we were unable to recover it.
00:27:08.203 [2024-12-12 10:40:42.099764] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.204 [2024-12-12 10:40:42.099834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.204 [2024-12-12 10:40:42.099848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.204 [2024-12-12 10:40:42.099855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.204 [2024-12-12 10:40:42.099861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.204 [2024-12-12 10:40:42.099876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.204 qpair failed and we were unable to recover it.
00:27:08.204 [2024-12-12 10:40:42.109789] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.204 [2024-12-12 10:40:42.109851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.204 [2024-12-12 10:40:42.109864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.204 [2024-12-12 10:40:42.109872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.204 [2024-12-12 10:40:42.109879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.204 [2024-12-12 10:40:42.109893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.204 qpair failed and we were unable to recover it.
00:27:08.204 [2024-12-12 10:40:42.119775] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.204 [2024-12-12 10:40:42.119829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.204 [2024-12-12 10:40:42.119843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.204 [2024-12-12 10:40:42.119851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.204 [2024-12-12 10:40:42.119857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.204 [2024-12-12 10:40:42.119875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.204 qpair failed and we were unable to recover it.
00:27:08.204 [2024-12-12 10:40:42.129776] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.204 [2024-12-12 10:40:42.129868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.204 [2024-12-12 10:40:42.129881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.204 [2024-12-12 10:40:42.129889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.204 [2024-12-12 10:40:42.129894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.204 [2024-12-12 10:40:42.129909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.204 qpair failed and we were unable to recover it. 00:27:08.204 [2024-12-12 10:40:42.139834] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.204 [2024-12-12 10:40:42.139891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.204 [2024-12-12 10:40:42.139904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.204 [2024-12-12 10:40:42.139911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.204 [2024-12-12 10:40:42.139917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.204 [2024-12-12 10:40:42.139932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.204 qpair failed and we were unable to recover it. 00:27:08.204 [2024-12-12 10:40:42.149900] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.204 [2024-12-12 10:40:42.149965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.204 [2024-12-12 10:40:42.149978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.204 [2024-12-12 10:40:42.149985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.204 [2024-12-12 10:40:42.149991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.204 [2024-12-12 10:40:42.150006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.204 qpair failed and we were unable to recover it. 
00:27:08.204 [2024-12-12 10:40:42.159882] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.204 [2024-12-12 10:40:42.159936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.204 [2024-12-12 10:40:42.159948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.204 [2024-12-12 10:40:42.159955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.204 [2024-12-12 10:40:42.159962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.204 [2024-12-12 10:40:42.159976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.204 qpair failed and we were unable to recover it. 00:27:08.204 [2024-12-12 10:40:42.169927] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.204 [2024-12-12 10:40:42.169979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.204 [2024-12-12 10:40:42.169993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.204 [2024-12-12 10:40:42.170000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.204 [2024-12-12 10:40:42.170007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.204 [2024-12-12 10:40:42.170022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.204 qpair failed and we were unable to recover it. 00:27:08.204 [2024-12-12 10:40:42.179945] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.204 [2024-12-12 10:40:42.180001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.204 [2024-12-12 10:40:42.180014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.204 [2024-12-12 10:40:42.180021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.204 [2024-12-12 10:40:42.180027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.204 [2024-12-12 10:40:42.180042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.204 qpair failed and we were unable to recover it. 
00:27:08.204 [2024-12-12 10:40:42.189907] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.204 [2024-12-12 10:40:42.189966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.204 [2024-12-12 10:40:42.189979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.204 [2024-12-12 10:40:42.189986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.204 [2024-12-12 10:40:42.189992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.204 [2024-12-12 10:40:42.190007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.204 qpair failed and we were unable to recover it. 00:27:08.204 [2024-12-12 10:40:42.200051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.204 [2024-12-12 10:40:42.200116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.204 [2024-12-12 10:40:42.200129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.204 [2024-12-12 10:40:42.200136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.204 [2024-12-12 10:40:42.200143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.204 [2024-12-12 10:40:42.200157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.204 qpair failed and we were unable to recover it. 00:27:08.204 [2024-12-12 10:40:42.210046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.204 [2024-12-12 10:40:42.210101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.204 [2024-12-12 10:40:42.210117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.204 [2024-12-12 10:40:42.210125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.204 [2024-12-12 10:40:42.210131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.204 [2024-12-12 10:40:42.210145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.204 qpair failed and we were unable to recover it. 
00:27:08.204 [2024-12-12 10:40:42.220069] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.204 [2024-12-12 10:40:42.220126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.204 [2024-12-12 10:40:42.220140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.204 [2024-12-12 10:40:42.220147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.204 [2024-12-12 10:40:42.220153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.204 [2024-12-12 10:40:42.220168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.204 qpair failed and we were unable to recover it. 00:27:08.473 [2024-12-12 10:40:42.230093] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.473 [2024-12-12 10:40:42.230146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.473 [2024-12-12 10:40:42.230159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.473 [2024-12-12 10:40:42.230166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.473 [2024-12-12 10:40:42.230173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.473 [2024-12-12 10:40:42.230187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.473 qpair failed and we were unable to recover it. 00:27:08.473 [2024-12-12 10:40:42.240166] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.473 [2024-12-12 10:40:42.240236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.473 [2024-12-12 10:40:42.240249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.473 [2024-12-12 10:40:42.240256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.473 [2024-12-12 10:40:42.240262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.473 [2024-12-12 10:40:42.240277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.473 qpair failed and we were unable to recover it. 
00:27:08.473 [2024-12-12 10:40:42.250178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.473 [2024-12-12 10:40:42.250232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.473 [2024-12-12 10:40:42.250245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.473 [2024-12-12 10:40:42.250252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.473 [2024-12-12 10:40:42.250261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.473 [2024-12-12 10:40:42.250276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.473 qpair failed and we were unable to recover it. 00:27:08.473 [2024-12-12 10:40:42.260171] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.473 [2024-12-12 10:40:42.260227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.473 [2024-12-12 10:40:42.260240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.473 [2024-12-12 10:40:42.260247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.473 [2024-12-12 10:40:42.260253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.473 [2024-12-12 10:40:42.260268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.473 qpair failed and we were unable to recover it. 00:27:08.473 [2024-12-12 10:40:42.270206] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.473 [2024-12-12 10:40:42.270288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.473 [2024-12-12 10:40:42.270302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.473 [2024-12-12 10:40:42.270310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.473 [2024-12-12 10:40:42.270316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.473 [2024-12-12 10:40:42.270331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.473 qpair failed and we were unable to recover it. 
00:27:08.473 [2024-12-12 10:40:42.280213] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.473 [2024-12-12 10:40:42.280268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.473 [2024-12-12 10:40:42.280281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.473 [2024-12-12 10:40:42.280288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.473 [2024-12-12 10:40:42.280294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.473 [2024-12-12 10:40:42.280308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.473 qpair failed and we were unable to recover it. 00:27:08.473 [2024-12-12 10:40:42.290260] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.473 [2024-12-12 10:40:42.290314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.473 [2024-12-12 10:40:42.290328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.473 [2024-12-12 10:40:42.290335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.473 [2024-12-12 10:40:42.290341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.473 [2024-12-12 10:40:42.290355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.473 qpair failed and we were unable to recover it. 00:27:08.473 [2024-12-12 10:40:42.300308] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.474 [2024-12-12 10:40:42.300392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.474 [2024-12-12 10:40:42.300405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.474 [2024-12-12 10:40:42.300412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.474 [2024-12-12 10:40:42.300418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.474 [2024-12-12 10:40:42.300433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.474 qpair failed and we were unable to recover it. 
00:27:08.474 [2024-12-12 10:40:42.310321] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.474 [2024-12-12 10:40:42.310373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.474 [2024-12-12 10:40:42.310386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.474 [2024-12-12 10:40:42.310394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.474 [2024-12-12 10:40:42.310401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.474 [2024-12-12 10:40:42.310416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.474 qpair failed and we were unable to recover it. 00:27:08.474 [2024-12-12 10:40:42.320376] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.474 [2024-12-12 10:40:42.320434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.474 [2024-12-12 10:40:42.320448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.474 [2024-12-12 10:40:42.320455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.474 [2024-12-12 10:40:42.320461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.474 [2024-12-12 10:40:42.320476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.474 qpair failed and we were unable to recover it. 00:27:08.474 [2024-12-12 10:40:42.330366] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.474 [2024-12-12 10:40:42.330424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.474 [2024-12-12 10:40:42.330437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.474 [2024-12-12 10:40:42.330444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.474 [2024-12-12 10:40:42.330450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.474 [2024-12-12 10:40:42.330464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.474 qpair failed and we were unable to recover it. 
00:27:08.474 [2024-12-12 10:40:42.340405] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.474 [2024-12-12 10:40:42.340504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.474 [2024-12-12 10:40:42.340521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.474 [2024-12-12 10:40:42.340528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.474 [2024-12-12 10:40:42.340534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.474 [2024-12-12 10:40:42.340549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.474 qpair failed and we were unable to recover it. 00:27:08.474 [2024-12-12 10:40:42.350420] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.474 [2024-12-12 10:40:42.350480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.474 [2024-12-12 10:40:42.350494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.474 [2024-12-12 10:40:42.350501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.474 [2024-12-12 10:40:42.350507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.474 [2024-12-12 10:40:42.350521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.474 qpair failed and we were unable to recover it. 00:27:08.474 [2024-12-12 10:40:42.360445] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.474 [2024-12-12 10:40:42.360503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.474 [2024-12-12 10:40:42.360517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.474 [2024-12-12 10:40:42.360524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.474 [2024-12-12 10:40:42.360531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.474 [2024-12-12 10:40:42.360545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.474 qpair failed and we were unable to recover it. 
00:27:08.474 [2024-12-12 10:40:42.370510] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.474 [2024-12-12 10:40:42.370566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.474 [2024-12-12 10:40:42.370584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.474 [2024-12-12 10:40:42.370591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.474 [2024-12-12 10:40:42.370598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.474 [2024-12-12 10:40:42.370613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.474 qpair failed and we were unable to recover it. 00:27:08.474 [2024-12-12 10:40:42.380521] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.474 [2024-12-12 10:40:42.380583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.474 [2024-12-12 10:40:42.380597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.474 [2024-12-12 10:40:42.380604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.474 [2024-12-12 10:40:42.380615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.474 [2024-12-12 10:40:42.380631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.474 qpair failed and we were unable to recover it. 00:27:08.474 [2024-12-12 10:40:42.390535] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.474 [2024-12-12 10:40:42.390588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.474 [2024-12-12 10:40:42.390602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.474 [2024-12-12 10:40:42.390609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.474 [2024-12-12 10:40:42.390615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.474 [2024-12-12 10:40:42.390630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.474 qpair failed and we were unable to recover it. 
00:27:08.474 [2024-12-12 10:40:42.400576] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.474 [2024-12-12 10:40:42.400635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.474 [2024-12-12 10:40:42.400648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.474 [2024-12-12 10:40:42.400655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.474 [2024-12-12 10:40:42.400661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.474 [2024-12-12 10:40:42.400676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.474 qpair failed and we were unable to recover it. 00:27:08.474 [2024-12-12 10:40:42.410595] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.474 [2024-12-12 10:40:42.410652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.474 [2024-12-12 10:40:42.410665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.474 [2024-12-12 10:40:42.410671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.474 [2024-12-12 10:40:42.410678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.474 [2024-12-12 10:40:42.410693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.474 qpair failed and we were unable to recover it. 00:27:08.474 [2024-12-12 10:40:42.420628] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.474 [2024-12-12 10:40:42.420689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.474 [2024-12-12 10:40:42.420703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.474 [2024-12-12 10:40:42.420711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.474 [2024-12-12 10:40:42.420717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.474 [2024-12-12 10:40:42.420732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.474 qpair failed and we were unable to recover it. 
00:27:08.474 [2024-12-12 10:40:42.430636] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.474 [2024-12-12 10:40:42.430691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.474 [2024-12-12 10:40:42.430704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.474 [2024-12-12 10:40:42.430711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.474 [2024-12-12 10:40:42.430717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.474 [2024-12-12 10:40:42.430732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.474 qpair failed and we were unable to recover it. 00:27:08.474 [2024-12-12 10:40:42.440674] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.474 [2024-12-12 10:40:42.440731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.474 [2024-12-12 10:40:42.440744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.474 [2024-12-12 10:40:42.440752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.474 [2024-12-12 10:40:42.440758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.474 [2024-12-12 10:40:42.440773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.474 qpair failed and we were unable to recover it. 00:27:08.474 [2024-12-12 10:40:42.450701] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.474 [2024-12-12 10:40:42.450760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.474 [2024-12-12 10:40:42.450773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.474 [2024-12-12 10:40:42.450780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.474 [2024-12-12 10:40:42.450787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.474 [2024-12-12 10:40:42.450801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.474 qpair failed and we were unable to recover it. 
00:27:08.474 [2024-12-12 10:40:42.460776] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.475 [2024-12-12 10:40:42.460839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.475 [2024-12-12 10:40:42.460851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.475 [2024-12-12 10:40:42.460859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.475 [2024-12-12 10:40:42.460865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.475 [2024-12-12 10:40:42.460880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.475 qpair failed and we were unable to recover it. 00:27:08.475 [2024-12-12 10:40:42.470761] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.475 [2024-12-12 10:40:42.470819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.475 [2024-12-12 10:40:42.470832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.475 [2024-12-12 10:40:42.470839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.475 [2024-12-12 10:40:42.470845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.475 [2024-12-12 10:40:42.470860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.475 qpair failed and we were unable to recover it. 00:27:08.475 [2024-12-12 10:40:42.480795] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.475 [2024-12-12 10:40:42.480852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.475 [2024-12-12 10:40:42.480865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.475 [2024-12-12 10:40:42.480872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.475 [2024-12-12 10:40:42.480878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.475 [2024-12-12 10:40:42.480893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.475 qpair failed and we were unable to recover it. 
00:27:08.475 [2024-12-12 10:40:42.490812] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.475 [2024-12-12 10:40:42.490870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.475 [2024-12-12 10:40:42.490883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.475 [2024-12-12 10:40:42.490890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.475 [2024-12-12 10:40:42.490897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.475 [2024-12-12 10:40:42.490911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.475 qpair failed and we were unable to recover it. 00:27:08.734 [2024-12-12 10:40:42.500847] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.734 [2024-12-12 10:40:42.500901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.734 [2024-12-12 10:40:42.500914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.734 [2024-12-12 10:40:42.500921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.734 [2024-12-12 10:40:42.500927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.734 [2024-12-12 10:40:42.500942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.734 qpair failed and we were unable to recover it. 00:27:08.734 [2024-12-12 10:40:42.510956] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.734 [2024-12-12 10:40:42.511042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.734 [2024-12-12 10:40:42.511056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.734 [2024-12-12 10:40:42.511066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.734 [2024-12-12 10:40:42.511072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.734 [2024-12-12 10:40:42.511087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.734 qpair failed and we were unable to recover it. 
00:27:08.734 [2024-12-12 10:40:42.520889] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.734 [2024-12-12 10:40:42.520951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.734 [2024-12-12 10:40:42.520966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.734 [2024-12-12 10:40:42.520974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.734 [2024-12-12 10:40:42.520980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.734 [2024-12-12 10:40:42.520995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.734 qpair failed and we were unable to recover it. 00:27:08.734 [2024-12-12 10:40:42.530923] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.734 [2024-12-12 10:40:42.530987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.734 [2024-12-12 10:40:42.530999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.734 [2024-12-12 10:40:42.531007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.734 [2024-12-12 10:40:42.531013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.734 [2024-12-12 10:40:42.531028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.734 qpair failed and we were unable to recover it. 00:27:08.734 [2024-12-12 10:40:42.540951] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.734 [2024-12-12 10:40:42.541004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.734 [2024-12-12 10:40:42.541018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.734 [2024-12-12 10:40:42.541025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.734 [2024-12-12 10:40:42.541032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.734 [2024-12-12 10:40:42.541046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.734 qpair failed and we were unable to recover it. 
00:27:08.734 [2024-12-12 10:40:42.550975] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.734 [2024-12-12 10:40:42.551026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.734 [2024-12-12 10:40:42.551039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.734 [2024-12-12 10:40:42.551046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.734 [2024-12-12 10:40:42.551053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.734 [2024-12-12 10:40:42.551071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.734 qpair failed and we were unable to recover it. 00:27:08.734 [2024-12-12 10:40:42.560929] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.734 [2024-12-12 10:40:42.560980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.734 [2024-12-12 10:40:42.560993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.734 [2024-12-12 10:40:42.561000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.734 [2024-12-12 10:40:42.561006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.734 [2024-12-12 10:40:42.561021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.734 qpair failed and we were unable to recover it. 00:27:08.734 [2024-12-12 10:40:42.571005] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.735 [2024-12-12 10:40:42.571082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.735 [2024-12-12 10:40:42.571096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.735 [2024-12-12 10:40:42.571103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.735 [2024-12-12 10:40:42.571109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.735 [2024-12-12 10:40:42.571123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.735 qpair failed and we were unable to recover it. 
00:27:08.735 [2024-12-12 10:40:42.581064] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.735 [2024-12-12 10:40:42.581118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.735 [2024-12-12 10:40:42.581130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.735 [2024-12-12 10:40:42.581137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.735 [2024-12-12 10:40:42.581144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.735 [2024-12-12 10:40:42.581158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.735 qpair failed and we were unable to recover it. 00:27:08.735 [2024-12-12 10:40:42.591096] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.735 [2024-12-12 10:40:42.591197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.735 [2024-12-12 10:40:42.591210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.735 [2024-12-12 10:40:42.591217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.735 [2024-12-12 10:40:42.591222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.735 [2024-12-12 10:40:42.591236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.735 qpair failed and we were unable to recover it. 00:27:08.735 [2024-12-12 10:40:42.601134] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.735 [2024-12-12 10:40:42.601208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.735 [2024-12-12 10:40:42.601222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.735 [2024-12-12 10:40:42.601229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.735 [2024-12-12 10:40:42.601235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.735 [2024-12-12 10:40:42.601250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.735 qpair failed and we were unable to recover it. 
00:27:08.735 [2024-12-12 10:40:42.611160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.735 [2024-12-12 10:40:42.611214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.735 [2024-12-12 10:40:42.611227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.735 [2024-12-12 10:40:42.611234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.735 [2024-12-12 10:40:42.611241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.735 [2024-12-12 10:40:42.611256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.735 qpair failed and we were unable to recover it. 00:27:08.735 [2024-12-12 10:40:42.621148] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.735 [2024-12-12 10:40:42.621231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.735 [2024-12-12 10:40:42.621244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.735 [2024-12-12 10:40:42.621252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.735 [2024-12-12 10:40:42.621258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.735 [2024-12-12 10:40:42.621272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.735 qpair failed and we were unable to recover it. 00:27:08.735 [2024-12-12 10:40:42.631218] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.735 [2024-12-12 10:40:42.631271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.735 [2024-12-12 10:40:42.631284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.735 [2024-12-12 10:40:42.631291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.735 [2024-12-12 10:40:42.631296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.735 [2024-12-12 10:40:42.631312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.735 qpair failed and we were unable to recover it. 
00:27:08.735 [2024-12-12 10:40:42.641253] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.735 [2024-12-12 10:40:42.641302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.735 [2024-12-12 10:40:42.641315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.735 [2024-12-12 10:40:42.641325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.735 [2024-12-12 10:40:42.641331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.735 [2024-12-12 10:40:42.641346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.735 qpair failed and we were unable to recover it. 00:27:08.735 [2024-12-12 10:40:42.651280] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.735 [2024-12-12 10:40:42.651335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.735 [2024-12-12 10:40:42.651348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.735 [2024-12-12 10:40:42.651355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.735 [2024-12-12 10:40:42.651361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.735 [2024-12-12 10:40:42.651375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.735 qpair failed and we were unable to recover it. 00:27:08.735 [2024-12-12 10:40:42.661306] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.735 [2024-12-12 10:40:42.661370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.735 [2024-12-12 10:40:42.661383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.735 [2024-12-12 10:40:42.661390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.735 [2024-12-12 10:40:42.661396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.735 [2024-12-12 10:40:42.661411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.735 qpair failed and we were unable to recover it. 
00:27:08.735 [2024-12-12 10:40:42.671372] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.735 [2024-12-12 10:40:42.671429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.735 [2024-12-12 10:40:42.671442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.735 [2024-12-12 10:40:42.671449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.735 [2024-12-12 10:40:42.671455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.735 [2024-12-12 10:40:42.671470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.735 qpair failed and we were unable to recover it. 00:27:08.735 [2024-12-12 10:40:42.681383] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.735 [2024-12-12 10:40:42.681434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.735 [2024-12-12 10:40:42.681447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.735 [2024-12-12 10:40:42.681454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.735 [2024-12-12 10:40:42.681460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.735 [2024-12-12 10:40:42.681478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.735 qpair failed and we were unable to recover it. 00:27:08.735 [2024-12-12 10:40:42.691400] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.735 [2024-12-12 10:40:42.691457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.735 [2024-12-12 10:40:42.691469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.735 [2024-12-12 10:40:42.691476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.735 [2024-12-12 10:40:42.691482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:08.735 [2024-12-12 10:40:42.691497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.735 qpair failed and we were unable to recover it. 
00:27:08.735 [2024-12-12 10:40:42.701470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.735 [2024-12-12 10:40:42.701534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.735 [2024-12-12 10:40:42.701548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.735 [2024-12-12 10:40:42.701555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.736 [2024-12-12 10:40:42.701561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.736 [2024-12-12 10:40:42.701581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.736 qpair failed and we were unable to recover it.
00:27:08.736 [2024-12-12 10:40:42.711434] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.736 [2024-12-12 10:40:42.711488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.736 [2024-12-12 10:40:42.711500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.736 [2024-12-12 10:40:42.711507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.736 [2024-12-12 10:40:42.711513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.736 [2024-12-12 10:40:42.711528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.736 qpair failed and we were unable to recover it.
00:27:08.736 [2024-12-12 10:40:42.721471] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.736 [2024-12-12 10:40:42.721520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.736 [2024-12-12 10:40:42.721534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.736 [2024-12-12 10:40:42.721541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.736 [2024-12-12 10:40:42.721547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.736 [2024-12-12 10:40:42.721562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.736 qpair failed and we were unable to recover it.
00:27:08.736 [2024-12-12 10:40:42.731476] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.736 [2024-12-12 10:40:42.731536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.736 [2024-12-12 10:40:42.731549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.736 [2024-12-12 10:40:42.731556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.736 [2024-12-12 10:40:42.731562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.736 [2024-12-12 10:40:42.731582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.736 qpair failed and we were unable to recover it.
00:27:08.736 [2024-12-12 10:40:42.741552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.736 [2024-12-12 10:40:42.741619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.736 [2024-12-12 10:40:42.741633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.736 [2024-12-12 10:40:42.741641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.736 [2024-12-12 10:40:42.741647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.736 [2024-12-12 10:40:42.741662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.736 qpair failed and we were unable to recover it.
00:27:08.736 [2024-12-12 10:40:42.751474] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.736 [2024-12-12 10:40:42.751542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.736 [2024-12-12 10:40:42.751555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.736 [2024-12-12 10:40:42.751563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.736 [2024-12-12 10:40:42.751572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.736 [2024-12-12 10:40:42.751587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.736 qpair failed and we were unable to recover it.
00:27:08.995 [2024-12-12 10:40:42.761609] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.995 [2024-12-12 10:40:42.761667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.995 [2024-12-12 10:40:42.761681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.995 [2024-12-12 10:40:42.761687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.995 [2024-12-12 10:40:42.761693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.995 [2024-12-12 10:40:42.761708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.995 qpair failed and we were unable to recover it.
00:27:08.995 [2024-12-12 10:40:42.771623] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.995 [2024-12-12 10:40:42.771678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.995 [2024-12-12 10:40:42.771695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.995 [2024-12-12 10:40:42.771703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.995 [2024-12-12 10:40:42.771709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.995 [2024-12-12 10:40:42.771724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.995 qpair failed and we were unable to recover it.
00:27:08.995 [2024-12-12 10:40:42.781634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.995 [2024-12-12 10:40:42.781690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.995 [2024-12-12 10:40:42.781703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.995 [2024-12-12 10:40:42.781710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.995 [2024-12-12 10:40:42.781717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.995 [2024-12-12 10:40:42.781731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.995 qpair failed and we were unable to recover it.
00:27:08.995 [2024-12-12 10:40:42.791654] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.995 [2024-12-12 10:40:42.791709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.995 [2024-12-12 10:40:42.791722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.995 [2024-12-12 10:40:42.791728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.995 [2024-12-12 10:40:42.791734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.995 [2024-12-12 10:40:42.791749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.995 qpair failed and we were unable to recover it.
00:27:08.995 [2024-12-12 10:40:42.801707] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.996 [2024-12-12 10:40:42.801774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.996 [2024-12-12 10:40:42.801788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.996 [2024-12-12 10:40:42.801795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.996 [2024-12-12 10:40:42.801800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.996 [2024-12-12 10:40:42.801814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.996 qpair failed and we were unable to recover it.
00:27:08.996 [2024-12-12 10:40:42.811728] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.996 [2024-12-12 10:40:42.811781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.996 [2024-12-12 10:40:42.811794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.996 [2024-12-12 10:40:42.811801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.996 [2024-12-12 10:40:42.811809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.996 [2024-12-12 10:40:42.811824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.996 qpair failed and we were unable to recover it.
00:27:08.996 [2024-12-12 10:40:42.821755] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.996 [2024-12-12 10:40:42.821805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.996 [2024-12-12 10:40:42.821820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.996 [2024-12-12 10:40:42.821827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.996 [2024-12-12 10:40:42.821833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.996 [2024-12-12 10:40:42.821848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.996 qpair failed and we were unable to recover it.
00:27:08.996 [2024-12-12 10:40:42.831774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.996 [2024-12-12 10:40:42.831868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.996 [2024-12-12 10:40:42.831881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.996 [2024-12-12 10:40:42.831888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.996 [2024-12-12 10:40:42.831894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.996 [2024-12-12 10:40:42.831909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.996 qpair failed and we were unable to recover it.
00:27:08.996 [2024-12-12 10:40:42.841800] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.996 [2024-12-12 10:40:42.841855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.996 [2024-12-12 10:40:42.841868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.996 [2024-12-12 10:40:42.841875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.996 [2024-12-12 10:40:42.841881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.996 [2024-12-12 10:40:42.841896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.996 qpair failed and we were unable to recover it.
00:27:08.996 [2024-12-12 10:40:42.851765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.996 [2024-12-12 10:40:42.851826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.996 [2024-12-12 10:40:42.851840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.996 [2024-12-12 10:40:42.851847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.996 [2024-12-12 10:40:42.851852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.996 [2024-12-12 10:40:42.851867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.996 qpair failed and we were unable to recover it.
00:27:08.996 [2024-12-12 10:40:42.861819] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.996 [2024-12-12 10:40:42.861878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.996 [2024-12-12 10:40:42.861891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.996 [2024-12-12 10:40:42.861898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.996 [2024-12-12 10:40:42.861905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.996 [2024-12-12 10:40:42.861920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.996 qpair failed and we were unable to recover it.
00:27:08.996 [2024-12-12 10:40:42.871892] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.996 [2024-12-12 10:40:42.871948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.996 [2024-12-12 10:40:42.871962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.996 [2024-12-12 10:40:42.871970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.996 [2024-12-12 10:40:42.871977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.996 [2024-12-12 10:40:42.871992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.996 qpair failed and we were unable to recover it.
00:27:08.996 [2024-12-12 10:40:42.881941] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.996 [2024-12-12 10:40:42.881998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.996 [2024-12-12 10:40:42.882011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.996 [2024-12-12 10:40:42.882019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.996 [2024-12-12 10:40:42.882025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.996 [2024-12-12 10:40:42.882040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.996 qpair failed and we were unable to recover it.
00:27:08.996 [2024-12-12 10:40:42.891964] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.996 [2024-12-12 10:40:42.892021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.996 [2024-12-12 10:40:42.892034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.996 [2024-12-12 10:40:42.892041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.996 [2024-12-12 10:40:42.892047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.996 [2024-12-12 10:40:42.892062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.996 qpair failed and we were unable to recover it.
00:27:08.996 [2024-12-12 10:40:42.902017] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.996 [2024-12-12 10:40:42.902102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.996 [2024-12-12 10:40:42.902119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.996 [2024-12-12 10:40:42.902126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.996 [2024-12-12 10:40:42.902132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.996 [2024-12-12 10:40:42.902146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.996 qpair failed and we were unable to recover it.
00:27:08.996 [2024-12-12 10:40:42.911943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.996 [2024-12-12 10:40:42.912000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.996 [2024-12-12 10:40:42.912014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.996 [2024-12-12 10:40:42.912022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.996 [2024-12-12 10:40:42.912029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.996 [2024-12-12 10:40:42.912044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.996 qpair failed and we were unable to recover it.
00:27:08.996 [2024-12-12 10:40:42.922067] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.996 [2024-12-12 10:40:42.922122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.996 [2024-12-12 10:40:42.922137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.996 [2024-12-12 10:40:42.922144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.996 [2024-12-12 10:40:42.922151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.996 [2024-12-12 10:40:42.922166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.996 qpair failed and we were unable to recover it.
00:27:08.996 [2024-12-12 10:40:42.932050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.996 [2024-12-12 10:40:42.932114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.996 [2024-12-12 10:40:42.932128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.997 [2024-12-12 10:40:42.932134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.997 [2024-12-12 10:40:42.932140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.997 [2024-12-12 10:40:42.932155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.997 qpair failed and we were unable to recover it.
00:27:08.997 [2024-12-12 10:40:42.942103] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.997 [2024-12-12 10:40:42.942156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.997 [2024-12-12 10:40:42.942169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.997 [2024-12-12 10:40:42.942176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.997 [2024-12-12 10:40:42.942186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.997 [2024-12-12 10:40:42.942200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.997 qpair failed and we were unable to recover it.
00:27:08.997 [2024-12-12 10:40:42.952128] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.997 [2024-12-12 10:40:42.952195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.997 [2024-12-12 10:40:42.952208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.997 [2024-12-12 10:40:42.952216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.997 [2024-12-12 10:40:42.952222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.997 [2024-12-12 10:40:42.952237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.997 qpair failed and we were unable to recover it.
00:27:08.997 [2024-12-12 10:40:42.962191] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.997 [2024-12-12 10:40:42.962257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.997 [2024-12-12 10:40:42.962270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.997 [2024-12-12 10:40:42.962277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.997 [2024-12-12 10:40:42.962284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.997 [2024-12-12 10:40:42.962299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.997 qpair failed and we were unable to recover it.
00:27:08.997 [2024-12-12 10:40:42.972196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.997 [2024-12-12 10:40:42.972282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.997 [2024-12-12 10:40:42.972295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.997 [2024-12-12 10:40:42.972302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.997 [2024-12-12 10:40:42.972308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.997 [2024-12-12 10:40:42.972322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.997 qpair failed and we were unable to recover it.
00:27:08.997 [2024-12-12 10:40:42.982222] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.997 [2024-12-12 10:40:42.982279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.997 [2024-12-12 10:40:42.982292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.997 [2024-12-12 10:40:42.982299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.997 [2024-12-12 10:40:42.982306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.997 [2024-12-12 10:40:42.982319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.997 qpair failed and we were unable to recover it.
00:27:08.997 [2024-12-12 10:40:42.992192] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.997 [2024-12-12 10:40:42.992276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.997 [2024-12-12 10:40:42.992289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.997 [2024-12-12 10:40:42.992296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.997 [2024-12-12 10:40:42.992302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.997 [2024-12-12 10:40:42.992316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.997 qpair failed and we were unable to recover it.
00:27:08.997 [2024-12-12 10:40:43.002303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.997 [2024-12-12 10:40:43.002355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.997 [2024-12-12 10:40:43.002368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.997 [2024-12-12 10:40:43.002375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.997 [2024-12-12 10:40:43.002381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.997 [2024-12-12 10:40:43.002397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.997 qpair failed and we were unable to recover it.
00:27:08.997 [2024-12-12 10:40:43.012230] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.997 [2024-12-12 10:40:43.012298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.997 [2024-12-12 10:40:43.012311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.997 [2024-12-12 10:40:43.012318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.997 [2024-12-12 10:40:43.012324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:08.997 [2024-12-12 10:40:43.012339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.997 qpair failed and we were unable to recover it.
00:27:09.257 [2024-12-12 10:40:43.022270] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.257 [2024-12-12 10:40:43.022349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.257 [2024-12-12 10:40:43.022364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.257 [2024-12-12 10:40:43.022371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.257 [2024-12-12 10:40:43.022377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:09.257 [2024-12-12 10:40:43.022393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.257 qpair failed and we were unable to recover it.
00:27:09.257 [2024-12-12 10:40:43.032362] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.257 [2024-12-12 10:40:43.032431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.257 [2024-12-12 10:40:43.032444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.257 [2024-12-12 10:40:43.032451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.257 [2024-12-12 10:40:43.032457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:09.257 [2024-12-12 10:40:43.032472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.257 qpair failed and we were unable to recover it.
00:27:09.257 [2024-12-12 10:40:43.042387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.257 [2024-12-12 10:40:43.042434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.257 [2024-12-12 10:40:43.042448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.257 [2024-12-12 10:40:43.042455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.257 [2024-12-12 10:40:43.042461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:09.257 [2024-12-12 10:40:43.042476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.257 qpair failed and we were unable to recover it.
00:27:09.257 [2024-12-12 10:40:43.052427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.257 [2024-12-12 10:40:43.052484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.257 [2024-12-12 10:40:43.052497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.257 [2024-12-12 10:40:43.052504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.257 [2024-12-12 10:40:43.052510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:09.257 [2024-12-12 10:40:43.052525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.257 qpair failed and we were unable to recover it.
00:27:09.257 [2024-12-12 10:40:43.062447] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.257 [2024-12-12 10:40:43.062500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.257 [2024-12-12 10:40:43.062514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.258 [2024-12-12 10:40:43.062521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.258 [2024-12-12 10:40:43.062527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:09.258 [2024-12-12 10:40:43.062542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-12-12 10:40:43.072487] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.258 [2024-12-12 10:40:43.072540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.258 [2024-12-12 10:40:43.072553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.258 [2024-12-12 10:40:43.072564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.258 [2024-12-12 10:40:43.072574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:09.258 [2024-12-12 10:40:43.072589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-12-12 10:40:43.082491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.258 [2024-12-12 10:40:43.082546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.258 [2024-12-12 10:40:43.082559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.258 [2024-12-12 10:40:43.082566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.258 [2024-12-12 10:40:43.082577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:09.258 [2024-12-12 10:40:43.082593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-12-12 10:40:43.092528] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.258 [2024-12-12 10:40:43.092589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.258 [2024-12-12 10:40:43.092602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.258 [2024-12-12 10:40:43.092609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.258 [2024-12-12 10:40:43.092616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:09.258 [2024-12-12 10:40:43.092630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-12-12 10:40:43.102564] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.258 [2024-12-12 10:40:43.102629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.258 [2024-12-12 10:40:43.102642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.258 [2024-12-12 10:40:43.102649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.258 [2024-12-12 10:40:43.102655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:09.258 [2024-12-12 10:40:43.102670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-12-12 10:40:43.112580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.258 [2024-12-12 10:40:43.112635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.258 [2024-12-12 10:40:43.112648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.258 [2024-12-12 10:40:43.112655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.258 [2024-12-12 10:40:43.112661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:09.258 [2024-12-12 10:40:43.112680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-12-12 10:40:43.122603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.258 [2024-12-12 10:40:43.122655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.258 [2024-12-12 10:40:43.122669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.258 [2024-12-12 10:40:43.122676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.258 [2024-12-12 10:40:43.122682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:09.258 [2024-12-12 10:40:43.122697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-12-12 10:40:43.132646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.258 [2024-12-12 10:40:43.132703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.258 [2024-12-12 10:40:43.132716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.258 [2024-12-12 10:40:43.132723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.258 [2024-12-12 10:40:43.132730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:09.258 [2024-12-12 10:40:43.132745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-12-12 10:40:43.142721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.258 [2024-12-12 10:40:43.142790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.258 [2024-12-12 10:40:43.142803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.258 [2024-12-12 10:40:43.142810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.258 [2024-12-12 10:40:43.142816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:09.258 [2024-12-12 10:40:43.142831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-12-12 10:40:43.152689] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.258 [2024-12-12 10:40:43.152745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.258 [2024-12-12 10:40:43.152759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.258 [2024-12-12 10:40:43.152768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.258 [2024-12-12 10:40:43.152776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:09.258 [2024-12-12 10:40:43.152792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-12-12 10:40:43.162737] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.258 [2024-12-12 10:40:43.162800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.258 [2024-12-12 10:40:43.162813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.258 [2024-12-12 10:40:43.162820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.258 [2024-12-12 10:40:43.162827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:09.258 [2024-12-12 10:40:43.162841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-12-12 10:40:43.172769] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.258 [2024-12-12 10:40:43.172864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.258 [2024-12-12 10:40:43.172878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.258 [2024-12-12 10:40:43.172885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.258 [2024-12-12 10:40:43.172892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:09.258 [2024-12-12 10:40:43.172907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-12-12 10:40:43.182775] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.258 [2024-12-12 10:40:43.182840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.258 [2024-12-12 10:40:43.182853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.258 [2024-12-12 10:40:43.182861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.258 [2024-12-12 10:40:43.182867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:09.258 [2024-12-12 10:40:43.182883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.258 qpair failed and we were unable to recover it.
00:27:09.258 [2024-12-12 10:40:43.192817] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.258 [2024-12-12 10:40:43.192880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.258 [2024-12-12 10:40:43.192893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.258 [2024-12-12 10:40:43.192900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.258 [2024-12-12 10:40:43.192906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:09.258 [2024-12-12 10:40:43.192921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.259 qpair failed and we were unable to recover it.
00:27:09.259 [2024-12-12 10:40:43.202827] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.259 [2024-12-12 10:40:43.202914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.259 [2024-12-12 10:40:43.202932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.259 [2024-12-12 10:40:43.202939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.259 [2024-12-12 10:40:43.202945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:09.259 [2024-12-12 10:40:43.202960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.259 qpair failed and we were unable to recover it.
00:27:09.259 [2024-12-12 10:40:43.212869] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.259 [2024-12-12 10:40:43.212924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.259 [2024-12-12 10:40:43.212938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.259 [2024-12-12 10:40:43.212944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.259 [2024-12-12 10:40:43.212950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:09.259 [2024-12-12 10:40:43.212965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.259 qpair failed and we were unable to recover it.
00:27:09.259 [2024-12-12 10:40:43.222879] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.259 [2024-12-12 10:40:43.222937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.259 [2024-12-12 10:40:43.222951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.259 [2024-12-12 10:40:43.222958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.259 [2024-12-12 10:40:43.222964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:09.259 [2024-12-12 10:40:43.222978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.259 qpair failed and we were unable to recover it.
00:27:09.259 [2024-12-12 10:40:43.232878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.259 [2024-12-12 10:40:43.232966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.259 [2024-12-12 10:40:43.232979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.259 [2024-12-12 10:40:43.232986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.259 [2024-12-12 10:40:43.232992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:09.259 [2024-12-12 10:40:43.233006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.259 qpair failed and we were unable to recover it.
00:27:09.259 [2024-12-12 10:40:43.242915] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.259 [2024-12-12 10:40:43.242995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.259 [2024-12-12 10:40:43.243009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.259 [2024-12-12 10:40:43.243016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.259 [2024-12-12 10:40:43.243022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:09.259 [2024-12-12 10:40:43.243039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.259 qpair failed and we were unable to recover it.
00:27:09.259 [2024-12-12 10:40:43.252998] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.259 [2024-12-12 10:40:43.253079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.259 [2024-12-12 10:40:43.253092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.259 [2024-12-12 10:40:43.253099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.259 [2024-12-12 10:40:43.253105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:09.259 [2024-12-12 10:40:43.253119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.259 qpair failed and we were unable to recover it.
00:27:09.259 [2024-12-12 10:40:43.263012] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.259 [2024-12-12 10:40:43.263067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.259 [2024-12-12 10:40:43.263081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.259 [2024-12-12 10:40:43.263087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.259 [2024-12-12 10:40:43.263094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:09.259 [2024-12-12 10:40:43.263108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.259 qpair failed and we were unable to recover it.
00:27:09.259 [2024-12-12 10:40:43.273059] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.259 [2024-12-12 10:40:43.273115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.259 [2024-12-12 10:40:43.273130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.259 [2024-12-12 10:40:43.273137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.259 [2024-12-12 10:40:43.273143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.259 [2024-12-12 10:40:43.273159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.259 qpair failed and we were unable to recover it. 00:27:09.518 [2024-12-12 10:40:43.283078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.518 [2024-12-12 10:40:43.283170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.518 [2024-12-12 10:40:43.283183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.518 [2024-12-12 10:40:43.283190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.518 [2024-12-12 10:40:43.283196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.518 [2024-12-12 10:40:43.283210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.518 qpair failed and we were unable to recover it. 00:27:09.518 [2024-12-12 10:40:43.293041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.518 [2024-12-12 10:40:43.293129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.519 [2024-12-12 10:40:43.293142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.519 [2024-12-12 10:40:43.293149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.519 [2024-12-12 10:40:43.293156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.519 [2024-12-12 10:40:43.293171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.519 qpair failed and we were unable to recover it. 
00:27:09.519 [2024-12-12 10:40:43.303120] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.519 [2024-12-12 10:40:43.303181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.519 [2024-12-12 10:40:43.303194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.519 [2024-12-12 10:40:43.303201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.519 [2024-12-12 10:40:43.303208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.519 [2024-12-12 10:40:43.303223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.519 qpair failed and we were unable to recover it. 00:27:09.519 [2024-12-12 10:40:43.313153] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.519 [2024-12-12 10:40:43.313211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.519 [2024-12-12 10:40:43.313225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.519 [2024-12-12 10:40:43.313232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.519 [2024-12-12 10:40:43.313239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.519 [2024-12-12 10:40:43.313254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.519 qpair failed and we were unable to recover it. 00:27:09.519 [2024-12-12 10:40:43.323168] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.519 [2024-12-12 10:40:43.323224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.519 [2024-12-12 10:40:43.323237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.519 [2024-12-12 10:40:43.323244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.519 [2024-12-12 10:40:43.323251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.519 [2024-12-12 10:40:43.323265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.519 qpair failed and we were unable to recover it. 
00:27:09.519 [2024-12-12 10:40:43.333163] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.519 [2024-12-12 10:40:43.333222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.519 [2024-12-12 10:40:43.333238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.519 [2024-12-12 10:40:43.333245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.519 [2024-12-12 10:40:43.333251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.519 [2024-12-12 10:40:43.333266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.519 qpair failed and we were unable to recover it. 00:27:09.519 [2024-12-12 10:40:43.343295] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.519 [2024-12-12 10:40:43.343351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.519 [2024-12-12 10:40:43.343381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.519 [2024-12-12 10:40:43.343389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.519 [2024-12-12 10:40:43.343395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.519 [2024-12-12 10:40:43.343412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.519 qpair failed and we were unable to recover it. 00:27:09.519 [2024-12-12 10:40:43.353215] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.519 [2024-12-12 10:40:43.353270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.519 [2024-12-12 10:40:43.353283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.519 [2024-12-12 10:40:43.353291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.519 [2024-12-12 10:40:43.353297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.519 [2024-12-12 10:40:43.353311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.519 qpair failed and we were unable to recover it. 
00:27:09.519 [2024-12-12 10:40:43.363254] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.519 [2024-12-12 10:40:43.363311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.519 [2024-12-12 10:40:43.363325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.519 [2024-12-12 10:40:43.363331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.519 [2024-12-12 10:40:43.363338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.519 [2024-12-12 10:40:43.363352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.519 qpair failed and we were unable to recover it. 00:27:09.519 [2024-12-12 10:40:43.373331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.519 [2024-12-12 10:40:43.373420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.519 [2024-12-12 10:40:43.373434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.519 [2024-12-12 10:40:43.373441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.519 [2024-12-12 10:40:43.373449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.519 [2024-12-12 10:40:43.373465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.519 qpair failed and we were unable to recover it. 00:27:09.519 [2024-12-12 10:40:43.383276] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.519 [2024-12-12 10:40:43.383364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.519 [2024-12-12 10:40:43.383378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.519 [2024-12-12 10:40:43.383385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.519 [2024-12-12 10:40:43.383391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.519 [2024-12-12 10:40:43.383405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.519 qpair failed and we were unable to recover it. 
00:27:09.519 [2024-12-12 10:40:43.393362] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.519 [2024-12-12 10:40:43.393417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.519 [2024-12-12 10:40:43.393430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.519 [2024-12-12 10:40:43.393437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.519 [2024-12-12 10:40:43.393443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.519 [2024-12-12 10:40:43.393457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.519 qpair failed and we were unable to recover it. 00:27:09.519 [2024-12-12 10:40:43.403459] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.519 [2024-12-12 10:40:43.403565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.519 [2024-12-12 10:40:43.403584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.519 [2024-12-12 10:40:43.403591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.519 [2024-12-12 10:40:43.403597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.519 [2024-12-12 10:40:43.403612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.519 qpair failed and we were unable to recover it. 00:27:09.519 [2024-12-12 10:40:43.413489] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.519 [2024-12-12 10:40:43.413550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.519 [2024-12-12 10:40:43.413562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.519 [2024-12-12 10:40:43.413574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.519 [2024-12-12 10:40:43.413581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.519 [2024-12-12 10:40:43.413596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.519 qpair failed and we were unable to recover it. 
00:27:09.519 [2024-12-12 10:40:43.423581] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.519 [2024-12-12 10:40:43.423648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.519 [2024-12-12 10:40:43.423662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.519 [2024-12-12 10:40:43.423669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.520 [2024-12-12 10:40:43.423675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.520 [2024-12-12 10:40:43.423691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.520 qpair failed and we were unable to recover it. 00:27:09.520 [2024-12-12 10:40:43.433546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.520 [2024-12-12 10:40:43.433604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.520 [2024-12-12 10:40:43.433618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.520 [2024-12-12 10:40:43.433625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.520 [2024-12-12 10:40:43.433631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.520 [2024-12-12 10:40:43.433646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.520 qpair failed and we were unable to recover it. 00:27:09.520 [2024-12-12 10:40:43.443538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.520 [2024-12-12 10:40:43.443634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.520 [2024-12-12 10:40:43.443647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.520 [2024-12-12 10:40:43.443654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.520 [2024-12-12 10:40:43.443660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.520 [2024-12-12 10:40:43.443675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.520 qpair failed and we were unable to recover it. 
00:27:09.520 [2024-12-12 10:40:43.453603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.520 [2024-12-12 10:40:43.453660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.520 [2024-12-12 10:40:43.453673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.520 [2024-12-12 10:40:43.453680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.520 [2024-12-12 10:40:43.453686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.520 [2024-12-12 10:40:43.453700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.520 qpair failed and we were unable to recover it. 00:27:09.520 [2024-12-12 10:40:43.463574] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.520 [2024-12-12 10:40:43.463626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.520 [2024-12-12 10:40:43.463642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.520 [2024-12-12 10:40:43.463649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.520 [2024-12-12 10:40:43.463655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.520 [2024-12-12 10:40:43.463669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.520 qpair failed and we were unable to recover it. 00:27:09.520 [2024-12-12 10:40:43.473587] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.520 [2024-12-12 10:40:43.473655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.520 [2024-12-12 10:40:43.473668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.520 [2024-12-12 10:40:43.473674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.520 [2024-12-12 10:40:43.473681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.520 [2024-12-12 10:40:43.473696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.520 qpair failed and we were unable to recover it. 
00:27:09.520 [2024-12-12 10:40:43.483679] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.520 [2024-12-12 10:40:43.483778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.520 [2024-12-12 10:40:43.483791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.520 [2024-12-12 10:40:43.483799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.520 [2024-12-12 10:40:43.483805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.520 [2024-12-12 10:40:43.483820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.520 qpair failed and we were unable to recover it. 00:27:09.520 [2024-12-12 10:40:43.493660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.520 [2024-12-12 10:40:43.493714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.520 [2024-12-12 10:40:43.493727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.520 [2024-12-12 10:40:43.493734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.520 [2024-12-12 10:40:43.493740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.520 [2024-12-12 10:40:43.493755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.520 qpair failed and we were unable to recover it. 00:27:09.520 [2024-12-12 10:40:43.503734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.520 [2024-12-12 10:40:43.503806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.520 [2024-12-12 10:40:43.503818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.520 [2024-12-12 10:40:43.503829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.520 [2024-12-12 10:40:43.503836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.520 [2024-12-12 10:40:43.503851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.520 qpair failed and we were unable to recover it. 
00:27:09.520 [2024-12-12 10:40:43.513684] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.520 [2024-12-12 10:40:43.513782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.520 [2024-12-12 10:40:43.513795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.520 [2024-12-12 10:40:43.513803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.520 [2024-12-12 10:40:43.513809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.520 [2024-12-12 10:40:43.513824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.520 qpair failed and we were unable to recover it. 00:27:09.520 [2024-12-12 10:40:43.523707] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.520 [2024-12-12 10:40:43.523761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.520 [2024-12-12 10:40:43.523776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.520 [2024-12-12 10:40:43.523784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.520 [2024-12-12 10:40:43.523790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.520 [2024-12-12 10:40:43.523806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.520 qpair failed and we were unable to recover it. 00:27:09.520 [2024-12-12 10:40:43.533773] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.520 [2024-12-12 10:40:43.533829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.520 [2024-12-12 10:40:43.533842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.520 [2024-12-12 10:40:43.533849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.520 [2024-12-12 10:40:43.533856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.520 [2024-12-12 10:40:43.533871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.520 qpair failed and we were unable to recover it. 
00:27:09.781 [2024-12-12 10:40:43.543791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.781 [2024-12-12 10:40:43.543880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.781 [2024-12-12 10:40:43.543893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.781 [2024-12-12 10:40:43.543900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.781 [2024-12-12 10:40:43.543906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.781 [2024-12-12 10:40:43.543922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.781 qpair failed and we were unable to recover it. 00:27:09.781 [2024-12-12 10:40:43.553849] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.781 [2024-12-12 10:40:43.553908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.781 [2024-12-12 10:40:43.553921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.781 [2024-12-12 10:40:43.553929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.781 [2024-12-12 10:40:43.553935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.781 [2024-12-12 10:40:43.553949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.781 qpair failed and we were unable to recover it. 00:27:09.781 [2024-12-12 10:40:43.563851] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.781 [2024-12-12 10:40:43.563906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.781 [2024-12-12 10:40:43.563920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.781 [2024-12-12 10:40:43.563927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.781 [2024-12-12 10:40:43.563933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.781 [2024-12-12 10:40:43.563948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.781 qpair failed and we were unable to recover it. 
00:27:09.781 [2024-12-12 10:40:43.573882] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.781 [2024-12-12 10:40:43.573937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.781 [2024-12-12 10:40:43.573950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.781 [2024-12-12 10:40:43.573957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.781 [2024-12-12 10:40:43.573963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.781 [2024-12-12 10:40:43.573979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.781 qpair failed and we were unable to recover it. 00:27:09.781 [2024-12-12 10:40:43.583961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.781 [2024-12-12 10:40:43.584029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.781 [2024-12-12 10:40:43.584042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.781 [2024-12-12 10:40:43.584049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.781 [2024-12-12 10:40:43.584056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.781 [2024-12-12 10:40:43.584070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.781 qpair failed and we were unable to recover it. 00:27:09.781 [2024-12-12 10:40:43.593934] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.781 [2024-12-12 10:40:43.593993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.781 [2024-12-12 10:40:43.594006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.781 [2024-12-12 10:40:43.594013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.781 [2024-12-12 10:40:43.594020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.781 [2024-12-12 10:40:43.594036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.781 qpair failed and we were unable to recover it. 
00:27:09.781 [2024-12-12 10:40:43.603957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.781 [2024-12-12 10:40:43.604065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.781 [2024-12-12 10:40:43.604079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.781 [2024-12-12 10:40:43.604086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.781 [2024-12-12 10:40:43.604091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.781 [2024-12-12 10:40:43.604106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.781 qpair failed and we were unable to recover it. 00:27:09.781 [2024-12-12 10:40:43.614002] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.781 [2024-12-12 10:40:43.614062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.781 [2024-12-12 10:40:43.614076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.781 [2024-12-12 10:40:43.614083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.781 [2024-12-12 10:40:43.614090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.781 [2024-12-12 10:40:43.614105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.781 qpair failed and we were unable to recover it. 00:27:09.781 [2024-12-12 10:40:43.624022] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.781 [2024-12-12 10:40:43.624076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.782 [2024-12-12 10:40:43.624090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.782 [2024-12-12 10:40:43.624097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.782 [2024-12-12 10:40:43.624103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.782 [2024-12-12 10:40:43.624118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.782 qpair failed and we were unable to recover it. 
00:27:09.782 [2024-12-12 10:40:43.634046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.782 [2024-12-12 10:40:43.634101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.782 [2024-12-12 10:40:43.634113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.782 [2024-12-12 10:40:43.634124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.782 [2024-12-12 10:40:43.634130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.782 [2024-12-12 10:40:43.634145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.782 qpair failed and we were unable to recover it. 00:27:09.782 [2024-12-12 10:40:43.644119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.782 [2024-12-12 10:40:43.644171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.782 [2024-12-12 10:40:43.644183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.782 [2024-12-12 10:40:43.644190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.782 [2024-12-12 10:40:43.644197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.782 [2024-12-12 10:40:43.644212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.782 qpair failed and we were unable to recover it. 00:27:09.782 [2024-12-12 10:40:43.654119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.782 [2024-12-12 10:40:43.654195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.782 [2024-12-12 10:40:43.654208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.782 [2024-12-12 10:40:43.654216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.782 [2024-12-12 10:40:43.654222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.782 [2024-12-12 10:40:43.654236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.782 qpair failed and we were unable to recover it. 
00:27:09.782 [2024-12-12 10:40:43.664141] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.782 [2024-12-12 10:40:43.664199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.782 [2024-12-12 10:40:43.664211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.782 [2024-12-12 10:40:43.664218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.782 [2024-12-12 10:40:43.664224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.782 [2024-12-12 10:40:43.664239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.782 qpair failed and we were unable to recover it. 00:27:09.782 [2024-12-12 10:40:43.674159] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.782 [2024-12-12 10:40:43.674213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.782 [2024-12-12 10:40:43.674225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.782 [2024-12-12 10:40:43.674233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.782 [2024-12-12 10:40:43.674238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.782 [2024-12-12 10:40:43.674256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.782 qpair failed and we were unable to recover it. 00:27:09.782 [2024-12-12 10:40:43.684212] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.782 [2024-12-12 10:40:43.684283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.782 [2024-12-12 10:40:43.684298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.782 [2024-12-12 10:40:43.684305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.782 [2024-12-12 10:40:43.684311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.782 [2024-12-12 10:40:43.684326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.782 qpair failed and we were unable to recover it. 
00:27:09.782 [2024-12-12 10:40:43.694227] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.782 [2024-12-12 10:40:43.694280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.782 [2024-12-12 10:40:43.694293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.782 [2024-12-12 10:40:43.694300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.782 [2024-12-12 10:40:43.694307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.782 [2024-12-12 10:40:43.694322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.782 qpair failed and we were unable to recover it. 00:27:09.782 [2024-12-12 10:40:43.704259] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.782 [2024-12-12 10:40:43.704311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.782 [2024-12-12 10:40:43.704324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.782 [2024-12-12 10:40:43.704331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.782 [2024-12-12 10:40:43.704338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.782 [2024-12-12 10:40:43.704353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.782 qpair failed and we were unable to recover it. 00:27:09.782 [2024-12-12 10:40:43.714292] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.782 [2024-12-12 10:40:43.714343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.782 [2024-12-12 10:40:43.714357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.782 [2024-12-12 10:40:43.714364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.782 [2024-12-12 10:40:43.714370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.782 [2024-12-12 10:40:43.714384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.782 qpair failed and we were unable to recover it. 
00:27:09.782 [2024-12-12 10:40:43.724306] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.782 [2024-12-12 10:40:43.724358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.782 [2024-12-12 10:40:43.724372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.782 [2024-12-12 10:40:43.724379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.782 [2024-12-12 10:40:43.724385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.782 [2024-12-12 10:40:43.724400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.782 qpair failed and we were unable to recover it. 00:27:09.782 [2024-12-12 10:40:43.734364] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.782 [2024-12-12 10:40:43.734420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.782 [2024-12-12 10:40:43.734433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.782 [2024-12-12 10:40:43.734440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.782 [2024-12-12 10:40:43.734446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.782 [2024-12-12 10:40:43.734461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.782 qpair failed and we were unable to recover it. 00:27:09.782 [2024-12-12 10:40:43.744396] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.782 [2024-12-12 10:40:43.744448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.782 [2024-12-12 10:40:43.744461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.782 [2024-12-12 10:40:43.744468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.782 [2024-12-12 10:40:43.744475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.782 [2024-12-12 10:40:43.744490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.782 qpair failed and we were unable to recover it. 
00:27:09.782 [2024-12-12 10:40:43.754434] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.782 [2024-12-12 10:40:43.754484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.782 [2024-12-12 10:40:43.754498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.782 [2024-12-12 10:40:43.754506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.782 [2024-12-12 10:40:43.754512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.783 [2024-12-12 10:40:43.754526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.783 qpair failed and we were unable to recover it. 00:27:09.783 [2024-12-12 10:40:43.764424] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.783 [2024-12-12 10:40:43.764508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.783 [2024-12-12 10:40:43.764524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.783 [2024-12-12 10:40:43.764532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.783 [2024-12-12 10:40:43.764538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.783 [2024-12-12 10:40:43.764553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.783 qpair failed and we were unable to recover it. 00:27:09.783 [2024-12-12 10:40:43.774467] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.783 [2024-12-12 10:40:43.774548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.783 [2024-12-12 10:40:43.774562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.783 [2024-12-12 10:40:43.774573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.783 [2024-12-12 10:40:43.774580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.783 [2024-12-12 10:40:43.774595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.783 qpair failed and we were unable to recover it. 
00:27:09.783 [2024-12-12 10:40:43.784492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.783 [2024-12-12 10:40:43.784546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.783 [2024-12-12 10:40:43.784559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.783 [2024-12-12 10:40:43.784566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.783 [2024-12-12 10:40:43.784576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.783 [2024-12-12 10:40:43.784593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.783 qpair failed and we were unable to recover it. 00:27:09.783 [2024-12-12 10:40:43.794513] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.783 [2024-12-12 10:40:43.794575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.783 [2024-12-12 10:40:43.794588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.783 [2024-12-12 10:40:43.794595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.783 [2024-12-12 10:40:43.794602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:09.783 [2024-12-12 10:40:43.794617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.783 qpair failed and we were unable to recover it. 00:27:10.043 [2024-12-12 10:40:43.804557] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.043 [2024-12-12 10:40:43.804617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.043 [2024-12-12 10:40:43.804631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.043 [2024-12-12 10:40:43.804638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.043 [2024-12-12 10:40:43.804644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.043 [2024-12-12 10:40:43.804662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.043 qpair failed and we were unable to recover it. 
00:27:10.043 [2024-12-12 10:40:43.814598] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.043 [2024-12-12 10:40:43.814656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.043 [2024-12-12 10:40:43.814670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.043 [2024-12-12 10:40:43.814677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.043 [2024-12-12 10:40:43.814684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.043 [2024-12-12 10:40:43.814699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.043 qpair failed and we were unable to recover it. 00:27:10.043 [2024-12-12 10:40:43.824605] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.043 [2024-12-12 10:40:43.824660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.043 [2024-12-12 10:40:43.824673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.043 [2024-12-12 10:40:43.824680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.043 [2024-12-12 10:40:43.824686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.043 [2024-12-12 10:40:43.824701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.043 qpair failed and we were unable to recover it. 00:27:10.043 [2024-12-12 10:40:43.834635] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.043 [2024-12-12 10:40:43.834690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.043 [2024-12-12 10:40:43.834703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.043 [2024-12-12 10:40:43.834710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.043 [2024-12-12 10:40:43.834717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.043 [2024-12-12 10:40:43.834733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.043 qpair failed and we were unable to recover it. 
00:27:10.043 [2024-12-12 10:40:43.844653] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.043 [2024-12-12 10:40:43.844714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.043 [2024-12-12 10:40:43.844727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.043 [2024-12-12 10:40:43.844734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.043 [2024-12-12 10:40:43.844740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.043 [2024-12-12 10:40:43.844756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.043 qpair failed and we were unable to recover it. 00:27:10.043 [2024-12-12 10:40:43.854749] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.043 [2024-12-12 10:40:43.854808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.043 [2024-12-12 10:40:43.854822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.043 [2024-12-12 10:40:43.854829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.043 [2024-12-12 10:40:43.854835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.043 [2024-12-12 10:40:43.854851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.043 qpair failed and we were unable to recover it. 00:27:10.043 [2024-12-12 10:40:43.864730] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.043 [2024-12-12 10:40:43.864783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.043 [2024-12-12 10:40:43.864795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.043 [2024-12-12 10:40:43.864803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.043 [2024-12-12 10:40:43.864809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.043 [2024-12-12 10:40:43.864824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.043 qpair failed and we were unable to recover it. 
00:27:10.043 [2024-12-12 10:40:43.874751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.043 [2024-12-12 10:40:43.874804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.043 [2024-12-12 10:40:43.874817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.044 [2024-12-12 10:40:43.874825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.044 [2024-12-12 10:40:43.874831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.044 [2024-12-12 10:40:43.874846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.044 qpair failed and we were unable to recover it. 00:27:10.044 [2024-12-12 10:40:43.884777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.044 [2024-12-12 10:40:43.884832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.044 [2024-12-12 10:40:43.884845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.044 [2024-12-12 10:40:43.884852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.044 [2024-12-12 10:40:43.884859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.044 [2024-12-12 10:40:43.884873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.044 qpair failed and we were unable to recover it. 00:27:10.044 [2024-12-12 10:40:43.894815] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.044 [2024-12-12 10:40:43.894872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.044 [2024-12-12 10:40:43.894888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.044 [2024-12-12 10:40:43.894896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.044 [2024-12-12 10:40:43.894902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.044 [2024-12-12 10:40:43.894917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.044 qpair failed and we were unable to recover it. 
00:27:10.044 [2024-12-12 10:40:43.904837] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.044 [2024-12-12 10:40:43.904893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.044 [2024-12-12 10:40:43.904907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.044 [2024-12-12 10:40:43.904914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.044 [2024-12-12 10:40:43.904920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.044 [2024-12-12 10:40:43.904936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.044 qpair failed and we were unable to recover it. 00:27:10.044 [2024-12-12 10:40:43.914859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.044 [2024-12-12 10:40:43.914916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.044 [2024-12-12 10:40:43.914929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.044 [2024-12-12 10:40:43.914937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.044 [2024-12-12 10:40:43.914943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.044 [2024-12-12 10:40:43.914957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.044 qpair failed and we were unable to recover it. 00:27:10.044 [2024-12-12 10:40:43.924825] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.044 [2024-12-12 10:40:43.924879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.044 [2024-12-12 10:40:43.924892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.044 [2024-12-12 10:40:43.924899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.044 [2024-12-12 10:40:43.924906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.044 [2024-12-12 10:40:43.924921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.044 qpair failed and we were unable to recover it. 
00:27:10.044 [2024-12-12 10:40:43.934919] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.044 [2024-12-12 10:40:43.934983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.044 [2024-12-12 10:40:43.934996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.044 [2024-12-12 10:40:43.935003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.044 [2024-12-12 10:40:43.935015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.044 [2024-12-12 10:40:43.935030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.044 qpair failed and we were unable to recover it. 00:27:10.044 [2024-12-12 10:40:43.944989] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.044 [2024-12-12 10:40:43.945052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.044 [2024-12-12 10:40:43.945065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.044 [2024-12-12 10:40:43.945072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.044 [2024-12-12 10:40:43.945079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.044 [2024-12-12 10:40:43.945094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.044 qpair failed and we were unable to recover it. 00:27:10.044 [2024-12-12 10:40:43.954997] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.044 [2024-12-12 10:40:43.955049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.044 [2024-12-12 10:40:43.955062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.044 [2024-12-12 10:40:43.955070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.044 [2024-12-12 10:40:43.955076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.044 [2024-12-12 10:40:43.955091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.044 qpair failed and we were unable to recover it. 
00:27:10.044 [2024-12-12 10:40:43.965026] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.044 [2024-12-12 10:40:43.965081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.044 [2024-12-12 10:40:43.965094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.044 [2024-12-12 10:40:43.965101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.044 [2024-12-12 10:40:43.965107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.044 [2024-12-12 10:40:43.965123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.044 qpair failed and we were unable to recover it. 00:27:10.044 [2024-12-12 10:40:43.975065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.044 [2024-12-12 10:40:43.975125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.044 [2024-12-12 10:40:43.975137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.044 [2024-12-12 10:40:43.975145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.044 [2024-12-12 10:40:43.975151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.044 [2024-12-12 10:40:43.975165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.044 qpair failed and we were unable to recover it. 00:27:10.044 [2024-12-12 10:40:43.985050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.044 [2024-12-12 10:40:43.985109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.044 [2024-12-12 10:40:43.985122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.044 [2024-12-12 10:40:43.985130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.044 [2024-12-12 10:40:43.985136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.044 [2024-12-12 10:40:43.985150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.044 qpair failed and we were unable to recover it. 
00:27:10.044 [2024-12-12 10:40:43.995101] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.044 [2024-12-12 10:40:43.995184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.044 [2024-12-12 10:40:43.995197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.044 [2024-12-12 10:40:43.995205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.044 [2024-12-12 10:40:43.995211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.044 [2024-12-12 10:40:43.995225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.044 qpair failed and we were unable to recover it. 00:27:10.044 [2024-12-12 10:40:44.005111] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.044 [2024-12-12 10:40:44.005166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.044 [2024-12-12 10:40:44.005179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.044 [2024-12-12 10:40:44.005186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.044 [2024-12-12 10:40:44.005193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.044 [2024-12-12 10:40:44.005208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.044 qpair failed and we were unable to recover it. 00:27:10.045 [2024-12-12 10:40:44.015142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.045 [2024-12-12 10:40:44.015234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.045 [2024-12-12 10:40:44.015248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.045 [2024-12-12 10:40:44.015255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.045 [2024-12-12 10:40:44.015262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.045 [2024-12-12 10:40:44.015276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.045 qpair failed and we were unable to recover it. 
00:27:10.045 [2024-12-12 10:40:44.025167] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.045 [2024-12-12 10:40:44.025222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.045 [2024-12-12 10:40:44.025239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.045 [2024-12-12 10:40:44.025246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.045 [2024-12-12 10:40:44.025252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.045 [2024-12-12 10:40:44.025267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.045 qpair failed and we were unable to recover it. 00:27:10.045 [2024-12-12 10:40:44.035214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.045 [2024-12-12 10:40:44.035268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.045 [2024-12-12 10:40:44.035282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.045 [2024-12-12 10:40:44.035289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.045 [2024-12-12 10:40:44.035295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.045 [2024-12-12 10:40:44.035310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.045 qpair failed and we were unable to recover it. 00:27:10.045 [2024-12-12 10:40:44.045227] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.045 [2024-12-12 10:40:44.045285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.045 [2024-12-12 10:40:44.045298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.045 [2024-12-12 10:40:44.045306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.045 [2024-12-12 10:40:44.045312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.045 [2024-12-12 10:40:44.045327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.045 qpair failed and we were unable to recover it. 
00:27:10.045 [2024-12-12 10:40:44.055247] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.045 [2024-12-12 10:40:44.055301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.045 [2024-12-12 10:40:44.055314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.045 [2024-12-12 10:40:44.055321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.045 [2024-12-12 10:40:44.055327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.045 [2024-12-12 10:40:44.055342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.045 qpair failed and we were unable to recover it. 00:27:10.305 [2024-12-12 10:40:44.065278] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.305 [2024-12-12 10:40:44.065335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.305 [2024-12-12 10:40:44.065348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.305 [2024-12-12 10:40:44.065358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.305 [2024-12-12 10:40:44.065365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.305 [2024-12-12 10:40:44.065379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.305 qpair failed and we were unable to recover it. 00:27:10.305 [2024-12-12 10:40:44.075302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.305 [2024-12-12 10:40:44.075359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.305 [2024-12-12 10:40:44.075372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.305 [2024-12-12 10:40:44.075381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.305 [2024-12-12 10:40:44.075387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.305 [2024-12-12 10:40:44.075401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.305 qpair failed and we were unable to recover it. 
00:27:10.305 [2024-12-12 10:40:44.085382] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.305 [2024-12-12 10:40:44.085446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.305 [2024-12-12 10:40:44.085459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.305 [2024-12-12 10:40:44.085466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.305 [2024-12-12 10:40:44.085473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.305 [2024-12-12 10:40:44.085488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.305 qpair failed and we were unable to recover it. 00:27:10.305 [2024-12-12 10:40:44.095373] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.305 [2024-12-12 10:40:44.095480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.305 [2024-12-12 10:40:44.095494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.305 [2024-12-12 10:40:44.095502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.305 [2024-12-12 10:40:44.095507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.305 [2024-12-12 10:40:44.095522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.305 qpair failed and we were unable to recover it. 00:27:10.305 [2024-12-12 10:40:44.105400] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.305 [2024-12-12 10:40:44.105456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.305 [2024-12-12 10:40:44.105469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.305 [2024-12-12 10:40:44.105476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.306 [2024-12-12 10:40:44.105483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.306 [2024-12-12 10:40:44.105497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.306 qpair failed and we were unable to recover it. 
00:27:10.306 [2024-12-12 10:40:44.115421] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.306 [2024-12-12 10:40:44.115474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.306 [2024-12-12 10:40:44.115487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.306 [2024-12-12 10:40:44.115494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.306 [2024-12-12 10:40:44.115501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.306 [2024-12-12 10:40:44.115516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.306 qpair failed and we were unable to recover it. 00:27:10.306 [2024-12-12 10:40:44.125429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.306 [2024-12-12 10:40:44.125494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.306 [2024-12-12 10:40:44.125508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.306 [2024-12-12 10:40:44.125515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.306 [2024-12-12 10:40:44.125521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.306 [2024-12-12 10:40:44.125535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.306 qpair failed and we were unable to recover it. 00:27:10.306 [2024-12-12 10:40:44.135490] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.306 [2024-12-12 10:40:44.135548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.306 [2024-12-12 10:40:44.135561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.306 [2024-12-12 10:40:44.135571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.306 [2024-12-12 10:40:44.135578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.306 [2024-12-12 10:40:44.135592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.306 qpair failed and we were unable to recover it. 
00:27:10.306 [2024-12-12 10:40:44.145510] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.306 [2024-12-12 10:40:44.145565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.306 [2024-12-12 10:40:44.145582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.306 [2024-12-12 10:40:44.145589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.306 [2024-12-12 10:40:44.145595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.306 [2024-12-12 10:40:44.145610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.306 qpair failed and we were unable to recover it. 00:27:10.306 [2024-12-12 10:40:44.155585] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.306 [2024-12-12 10:40:44.155684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.306 [2024-12-12 10:40:44.155698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.306 [2024-12-12 10:40:44.155705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.306 [2024-12-12 10:40:44.155710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.306 [2024-12-12 10:40:44.155725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.306 qpair failed and we were unable to recover it. 00:27:10.306 [2024-12-12 10:40:44.165577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.306 [2024-12-12 10:40:44.165657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.306 [2024-12-12 10:40:44.165671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.306 [2024-12-12 10:40:44.165678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.306 [2024-12-12 10:40:44.165684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.306 [2024-12-12 10:40:44.165698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.306 qpair failed and we were unable to recover it. 
00:27:10.306 [2024-12-12 10:40:44.175604] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.306 [2024-12-12 10:40:44.175661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.306 [2024-12-12 10:40:44.175674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.306 [2024-12-12 10:40:44.175680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.306 [2024-12-12 10:40:44.175686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.306 [2024-12-12 10:40:44.175701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.306 qpair failed and we were unable to recover it. 00:27:10.306 [2024-12-12 10:40:44.185634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.306 [2024-12-12 10:40:44.185689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.306 [2024-12-12 10:40:44.185702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.306 [2024-12-12 10:40:44.185710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.306 [2024-12-12 10:40:44.185716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.306 [2024-12-12 10:40:44.185730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.306 qpair failed and we were unable to recover it. 00:27:10.306 [2024-12-12 10:40:44.195671] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.306 [2024-12-12 10:40:44.195726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.306 [2024-12-12 10:40:44.195739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.306 [2024-12-12 10:40:44.195750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.306 [2024-12-12 10:40:44.195756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.306 [2024-12-12 10:40:44.195770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.306 qpair failed and we were unable to recover it. 
00:27:10.306 [2024-12-12 10:40:44.205727] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.306 [2024-12-12 10:40:44.205786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.306 [2024-12-12 10:40:44.205799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.306 [2024-12-12 10:40:44.205807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.306 [2024-12-12 10:40:44.205813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.306 [2024-12-12 10:40:44.205828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.306 qpair failed and we were unable to recover it. 00:27:10.306 [2024-12-12 10:40:44.215729] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.306 [2024-12-12 10:40:44.215815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.306 [2024-12-12 10:40:44.215829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.306 [2024-12-12 10:40:44.215836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.306 [2024-12-12 10:40:44.215842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.306 [2024-12-12 10:40:44.215857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.306 qpair failed and we were unable to recover it. 00:27:10.306 [2024-12-12 10:40:44.225743] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.306 [2024-12-12 10:40:44.225798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.306 [2024-12-12 10:40:44.225811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.306 [2024-12-12 10:40:44.225818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.306 [2024-12-12 10:40:44.225825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.306 [2024-12-12 10:40:44.225839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.306 qpair failed and we were unable to recover it. 
00:27:10.306 [2024-12-12 10:40:44.235776] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.306 [2024-12-12 10:40:44.235831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.306 [2024-12-12 10:40:44.235844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.306 [2024-12-12 10:40:44.235851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.306 [2024-12-12 10:40:44.235858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.306 [2024-12-12 10:40:44.235876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.306 qpair failed and we were unable to recover it. 00:27:10.307 [2024-12-12 10:40:44.245810] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.307 [2024-12-12 10:40:44.245874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.307 [2024-12-12 10:40:44.245887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.307 [2024-12-12 10:40:44.245895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.307 [2024-12-12 10:40:44.245901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.307 [2024-12-12 10:40:44.245915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.307 qpair failed and we were unable to recover it. 00:27:10.307 [2024-12-12 10:40:44.255838] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.307 [2024-12-12 10:40:44.255896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.307 [2024-12-12 10:40:44.255908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.307 [2024-12-12 10:40:44.255915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.307 [2024-12-12 10:40:44.255922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.307 [2024-12-12 10:40:44.255936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.307 qpair failed and we were unable to recover it. 
00:27:10.307 [2024-12-12 10:40:44.265856] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.307 [2024-12-12 10:40:44.265913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.307 [2024-12-12 10:40:44.265926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.307 [2024-12-12 10:40:44.265933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.307 [2024-12-12 10:40:44.265940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.307 [2024-12-12 10:40:44.265955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.307 qpair failed and we were unable to recover it. 00:27:10.307 [2024-12-12 10:40:44.275888] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.307 [2024-12-12 10:40:44.275965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.307 [2024-12-12 10:40:44.275979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.307 [2024-12-12 10:40:44.275987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.307 [2024-12-12 10:40:44.275993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.307 [2024-12-12 10:40:44.276007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.307 qpair failed and we were unable to recover it. 00:27:10.307 [2024-12-12 10:40:44.285888] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.307 [2024-12-12 10:40:44.285938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.307 [2024-12-12 10:40:44.285952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.307 [2024-12-12 10:40:44.285959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.307 [2024-12-12 10:40:44.285965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.307 [2024-12-12 10:40:44.285979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.307 qpair failed and we were unable to recover it. 
00:27:10.307 [2024-12-12 10:40:44.295949] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.307 [2024-12-12 10:40:44.296004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.307 [2024-12-12 10:40:44.296017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.307 [2024-12-12 10:40:44.296024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.307 [2024-12-12 10:40:44.296030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.307 [2024-12-12 10:40:44.296045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.307 qpair failed and we were unable to recover it. 00:27:10.307 [2024-12-12 10:40:44.305977] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.307 [2024-12-12 10:40:44.306046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.307 [2024-12-12 10:40:44.306059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.307 [2024-12-12 10:40:44.306067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.307 [2024-12-12 10:40:44.306073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.307 [2024-12-12 10:40:44.306088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.307 qpair failed and we were unable to recover it. 00:27:10.307 [2024-12-12 10:40:44.315999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.307 [2024-12-12 10:40:44.316054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.307 [2024-12-12 10:40:44.316067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.307 [2024-12-12 10:40:44.316075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.307 [2024-12-12 10:40:44.316081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.307 [2024-12-12 10:40:44.316097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.307 qpair failed and we were unable to recover it. 
00:27:10.307 [2024-12-12 10:40:44.326013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.307 [2024-12-12 10:40:44.326063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.307 [2024-12-12 10:40:44.326080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.307 [2024-12-12 10:40:44.326087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.307 [2024-12-12 10:40:44.326093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.307 [2024-12-12 10:40:44.326108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.307 qpair failed and we were unable to recover it. 00:27:10.567 [2024-12-12 10:40:44.336045] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.567 [2024-12-12 10:40:44.336102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.567 [2024-12-12 10:40:44.336115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.567 [2024-12-12 10:40:44.336122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.567 [2024-12-12 10:40:44.336129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.567 [2024-12-12 10:40:44.336143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.567 qpair failed and we were unable to recover it. 00:27:10.567 [2024-12-12 10:40:44.346047] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.568 [2024-12-12 10:40:44.346127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.568 [2024-12-12 10:40:44.346141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.568 [2024-12-12 10:40:44.346149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.568 [2024-12-12 10:40:44.346155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.568 [2024-12-12 10:40:44.346169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.568 qpair failed and we were unable to recover it. 
00:27:10.568 [2024-12-12 10:40:44.356099] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.568 [2024-12-12 10:40:44.356154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.568 [2024-12-12 10:40:44.356167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.568 [2024-12-12 10:40:44.356173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.568 [2024-12-12 10:40:44.356180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.568 [2024-12-12 10:40:44.356195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.568 qpair failed and we were unable to recover it. 00:27:10.568 [2024-12-12 10:40:44.366137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.568 [2024-12-12 10:40:44.366203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.568 [2024-12-12 10:40:44.366216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.568 [2024-12-12 10:40:44.366223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.568 [2024-12-12 10:40:44.366232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.568 [2024-12-12 10:40:44.366247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.568 qpair failed and we were unable to recover it. 00:27:10.568 [2024-12-12 10:40:44.376094] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.568 [2024-12-12 10:40:44.376163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.568 [2024-12-12 10:40:44.376177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.568 [2024-12-12 10:40:44.376184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.568 [2024-12-12 10:40:44.376190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:10.568 [2024-12-12 10:40:44.376205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.568 qpair failed and we were unable to recover it. 
00:27:11.093 [2024-12-12 10:40:45.048102] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.093 [2024-12-12 10:40:45.048159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.093 [2024-12-12 10:40:45.048172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.093 [2024-12-12 10:40:45.048179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.093 [2024-12-12 10:40:45.048185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:11.093 [2024-12-12 10:40:45.048200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.093 qpair failed and we were unable to recover it. 00:27:11.093 [2024-12-12 10:40:45.058034] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.093 [2024-12-12 10:40:45.058089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.093 [2024-12-12 10:40:45.058102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.093 [2024-12-12 10:40:45.058109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.093 [2024-12-12 10:40:45.058119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:11.093 [2024-12-12 10:40:45.058134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.093 qpair failed and we were unable to recover it. 00:27:11.094 [2024-12-12 10:40:45.068087] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.094 [2024-12-12 10:40:45.068143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.094 [2024-12-12 10:40:45.068156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.094 [2024-12-12 10:40:45.068163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.094 [2024-12-12 10:40:45.068169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:11.094 [2024-12-12 10:40:45.068184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.094 qpair failed and we were unable to recover it. 
00:27:11.094 [2024-12-12 10:40:45.078141] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.094 [2024-12-12 10:40:45.078197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.094 [2024-12-12 10:40:45.078211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.094 [2024-12-12 10:40:45.078217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.094 [2024-12-12 10:40:45.078224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:11.094 [2024-12-12 10:40:45.078238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.094 qpair failed and we were unable to recover it. 00:27:11.094 [2024-12-12 10:40:45.088217] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.094 [2024-12-12 10:40:45.088274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.094 [2024-12-12 10:40:45.088287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.094 [2024-12-12 10:40:45.088294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.094 [2024-12-12 10:40:45.088300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:11.094 [2024-12-12 10:40:45.088315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.094 qpair failed and we were unable to recover it. 00:27:11.094 [2024-12-12 10:40:45.098249] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.094 [2024-12-12 10:40:45.098354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.094 [2024-12-12 10:40:45.098367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.094 [2024-12-12 10:40:45.098374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.094 [2024-12-12 10:40:45.098380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:11.094 [2024-12-12 10:40:45.098394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.094 qpair failed and we were unable to recover it. 
00:27:11.094 [2024-12-12 10:40:45.108227] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.094 [2024-12-12 10:40:45.108281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.094 [2024-12-12 10:40:45.108295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.094 [2024-12-12 10:40:45.108302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.094 [2024-12-12 10:40:45.108308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:11.094 [2024-12-12 10:40:45.108323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.094 qpair failed and we were unable to recover it. 00:27:11.354 [2024-12-12 10:40:45.118260] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.354 [2024-12-12 10:40:45.118316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.354 [2024-12-12 10:40:45.118331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.354 [2024-12-12 10:40:45.118339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.354 [2024-12-12 10:40:45.118345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:11.354 [2024-12-12 10:40:45.118359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.354 qpair failed and we were unable to recover it. 00:27:11.354 [2024-12-12 10:40:45.128292] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.354 [2024-12-12 10:40:45.128354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.354 [2024-12-12 10:40:45.128367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.354 [2024-12-12 10:40:45.128375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.354 [2024-12-12 10:40:45.128380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:11.354 [2024-12-12 10:40:45.128395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.354 qpair failed and we were unable to recover it. 
00:27:11.354 [2024-12-12 10:40:45.138317] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.354 [2024-12-12 10:40:45.138386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.354 [2024-12-12 10:40:45.138399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.354 [2024-12-12 10:40:45.138407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.354 [2024-12-12 10:40:45.138414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:11.354 [2024-12-12 10:40:45.138429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.354 qpair failed and we were unable to recover it. 00:27:11.354 [2024-12-12 10:40:45.148334] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.354 [2024-12-12 10:40:45.148391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.354 [2024-12-12 10:40:45.148407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.354 [2024-12-12 10:40:45.148415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.354 [2024-12-12 10:40:45.148421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:11.354 [2024-12-12 10:40:45.148435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.354 qpair failed and we were unable to recover it. 00:27:11.354 [2024-12-12 10:40:45.158333] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.354 [2024-12-12 10:40:45.158386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.354 [2024-12-12 10:40:45.158399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.354 [2024-12-12 10:40:45.158406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.354 [2024-12-12 10:40:45.158412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:11.354 [2024-12-12 10:40:45.158427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.354 qpair failed and we were unable to recover it. 
00:27:11.354 [2024-12-12 10:40:45.168350] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.354 [2024-12-12 10:40:45.168402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.354 [2024-12-12 10:40:45.168416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.354 [2024-12-12 10:40:45.168423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.354 [2024-12-12 10:40:45.168429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:11.354 [2024-12-12 10:40:45.168444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.354 qpair failed and we were unable to recover it. 00:27:11.354 [2024-12-12 10:40:45.178417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.354 [2024-12-12 10:40:45.178473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.354 [2024-12-12 10:40:45.178486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.354 [2024-12-12 10:40:45.178494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.354 [2024-12-12 10:40:45.178500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:11.354 [2024-12-12 10:40:45.178515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.354 qpair failed and we were unable to recover it. 00:27:11.354 [2024-12-12 10:40:45.188441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.354 [2024-12-12 10:40:45.188500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.354 [2024-12-12 10:40:45.188513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.354 [2024-12-12 10:40:45.188524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.354 [2024-12-12 10:40:45.188530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:11.354 [2024-12-12 10:40:45.188544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.354 qpair failed and we were unable to recover it. 
00:27:11.354 [2024-12-12 10:40:45.198406] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.354 [2024-12-12 10:40:45.198466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.354 [2024-12-12 10:40:45.198479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.354 [2024-12-12 10:40:45.198487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.354 [2024-12-12 10:40:45.198492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:11.354 [2024-12-12 10:40:45.198507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.354 qpair failed and we were unable to recover it. 00:27:11.355 [2024-12-12 10:40:45.208489] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.355 [2024-12-12 10:40:45.208539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.355 [2024-12-12 10:40:45.208552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.355 [2024-12-12 10:40:45.208560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.355 [2024-12-12 10:40:45.208566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:11.355 [2024-12-12 10:40:45.208585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.355 qpair failed and we were unable to recover it. 00:27:11.355 [2024-12-12 10:40:45.218524] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.355 [2024-12-12 10:40:45.218585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.355 [2024-12-12 10:40:45.218600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.355 [2024-12-12 10:40:45.218608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.355 [2024-12-12 10:40:45.218614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:11.355 [2024-12-12 10:40:45.218628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.355 qpair failed and we were unable to recover it. 
00:27:11.355 [2024-12-12 10:40:45.228503] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.355 [2024-12-12 10:40:45.228560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.355 [2024-12-12 10:40:45.228577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.355 [2024-12-12 10:40:45.228584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.355 [2024-12-12 10:40:45.228590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:11.355 [2024-12-12 10:40:45.228606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.355 qpair failed and we were unable to recover it. 00:27:11.355 [2024-12-12 10:40:45.238577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.355 [2024-12-12 10:40:45.238632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.355 [2024-12-12 10:40:45.238646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.355 [2024-12-12 10:40:45.238653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.355 [2024-12-12 10:40:45.238659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:11.355 [2024-12-12 10:40:45.238674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.355 qpair failed and we were unable to recover it. 00:27:11.355 [2024-12-12 10:40:45.248607] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.355 [2024-12-12 10:40:45.248661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.355 [2024-12-12 10:40:45.248674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.355 [2024-12-12 10:40:45.248681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.355 [2024-12-12 10:40:45.248688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:11.355 [2024-12-12 10:40:45.248703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.355 qpair failed and we were unable to recover it. 
00:27:11.355 [2024-12-12 10:40:45.258633] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.355 [2024-12-12 10:40:45.258693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.355 [2024-12-12 10:40:45.258707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.355 [2024-12-12 10:40:45.258714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.355 [2024-12-12 10:40:45.258720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:11.355 [2024-12-12 10:40:45.258735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.355 qpair failed and we were unable to recover it. 00:27:11.355 [2024-12-12 10:40:45.268661] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.355 [2024-12-12 10:40:45.268714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.355 [2024-12-12 10:40:45.268727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.355 [2024-12-12 10:40:45.268735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.355 [2024-12-12 10:40:45.268741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:11.355 [2024-12-12 10:40:45.268756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.355 qpair failed and we were unable to recover it. 00:27:11.355 [2024-12-12 10:40:45.278674] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.355 [2024-12-12 10:40:45.278733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.355 [2024-12-12 10:40:45.278748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.355 [2024-12-12 10:40:45.278755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.355 [2024-12-12 10:40:45.278762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:11.355 [2024-12-12 10:40:45.278778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.355 qpair failed and we were unable to recover it. 
00:27:11.355 [2024-12-12 10:40:45.288713] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.355 [2024-12-12 10:40:45.288768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.355 [2024-12-12 10:40:45.288782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.355 [2024-12-12 10:40:45.288789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.355 [2024-12-12 10:40:45.288795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:11.355 [2024-12-12 10:40:45.288811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.355 qpair failed and we were unable to recover it. 00:27:11.355 [2024-12-12 10:40:45.298762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.355 [2024-12-12 10:40:45.298820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.355 [2024-12-12 10:40:45.298834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.355 [2024-12-12 10:40:45.298840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.355 [2024-12-12 10:40:45.298847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:11.355 [2024-12-12 10:40:45.298861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.355 qpair failed and we were unable to recover it. 00:27:11.355 [2024-12-12 10:40:45.308788] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.355 [2024-12-12 10:40:45.308844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.355 [2024-12-12 10:40:45.308858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.355 [2024-12-12 10:40:45.308865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.355 [2024-12-12 10:40:45.308871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:11.355 [2024-12-12 10:40:45.308886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.355 qpair failed and we were unable to recover it. 
00:27:11.355 [2024-12-12 10:40:45.318806] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.355 [2024-12-12 10:40:45.318881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.355 [2024-12-12 10:40:45.318895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.355 [2024-12-12 10:40:45.318906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.355 [2024-12-12 10:40:45.318912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:11.355 [2024-12-12 10:40:45.318927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.355 qpair failed and we were unable to recover it. 00:27:11.355 [2024-12-12 10:40:45.328827] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.355 [2024-12-12 10:40:45.328878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.355 [2024-12-12 10:40:45.328892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.355 [2024-12-12 10:40:45.328899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.355 [2024-12-12 10:40:45.328905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:11.355 [2024-12-12 10:40:45.328920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.355 qpair failed and we were unable to recover it. 00:27:11.355 [2024-12-12 10:40:45.338928] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.355 [2024-12-12 10:40:45.338983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.356 [2024-12-12 10:40:45.338996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.356 [2024-12-12 10:40:45.339003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.356 [2024-12-12 10:40:45.339010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:11.356 [2024-12-12 10:40:45.339025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.356 qpair failed and we were unable to recover it. 
00:27:11.356 [2024-12-12 10:40:45.348882] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.356 [2024-12-12 10:40:45.348940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.356 [2024-12-12 10:40:45.348953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.356 [2024-12-12 10:40:45.348960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.356 [2024-12-12 10:40:45.348967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:11.356 [2024-12-12 10:40:45.348982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.356 qpair failed and we were unable to recover it. 00:27:11.356 [2024-12-12 10:40:45.358918] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.356 [2024-12-12 10:40:45.358967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.356 [2024-12-12 10:40:45.358980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.356 [2024-12-12 10:40:45.358987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.356 [2024-12-12 10:40:45.358993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:11.356 [2024-12-12 10:40:45.359012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.356 qpair failed and we were unable to recover it. 00:27:11.356 [2024-12-12 10:40:45.368930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.356 [2024-12-12 10:40:45.368985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.356 [2024-12-12 10:40:45.368998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.356 [2024-12-12 10:40:45.369004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.356 [2024-12-12 10:40:45.369011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:11.356 [2024-12-12 10:40:45.369025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.356 qpair failed and we were unable to recover it. 
00:27:11.615 [2024-12-12 10:40:45.378902] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.615 [2024-12-12 10:40:45.378955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.615 [2024-12-12 10:40:45.378968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.615 [2024-12-12 10:40:45.378976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.615 [2024-12-12 10:40:45.378982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:11.615 [2024-12-12 10:40:45.378997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.615 qpair failed and we were unable to recover it. 00:27:11.615 [2024-12-12 10:40:45.388994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.615 [2024-12-12 10:40:45.389050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.615 [2024-12-12 10:40:45.389063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.615 [2024-12-12 10:40:45.389070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.615 [2024-12-12 10:40:45.389076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:11.615 [2024-12-12 10:40:45.389091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.615 qpair failed and we were unable to recover it. 00:27:11.615 [2024-12-12 10:40:45.399020] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.615 [2024-12-12 10:40:45.399078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.615 [2024-12-12 10:40:45.399091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.615 [2024-12-12 10:40:45.399097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.615 [2024-12-12 10:40:45.399104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:11.615 [2024-12-12 10:40:45.399119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.615 qpair failed and we were unable to recover it. 
00:27:11.615 [2024-12-12 10:40:45.409051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.615 [2024-12-12 10:40:45.409127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.615 [2024-12-12 10:40:45.409141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.615 [2024-12-12 10:40:45.409147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.615 [2024-12-12 10:40:45.409153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:11.615 [2024-12-12 10:40:45.409168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.615 qpair failed and we were unable to recover it. 00:27:11.615 [2024-12-12 10:40:45.419083] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.615 [2024-12-12 10:40:45.419187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.615 [2024-12-12 10:40:45.419200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.615 [2024-12-12 10:40:45.419207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.615 [2024-12-12 10:40:45.419213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:11.615 [2024-12-12 10:40:45.419227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.615 qpair failed and we were unable to recover it. 00:27:11.615 [2024-12-12 10:40:45.429139] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.615 [2024-12-12 10:40:45.429210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.615 [2024-12-12 10:40:45.429224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.615 [2024-12-12 10:40:45.429231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.615 [2024-12-12 10:40:45.429239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:11.615 [2024-12-12 10:40:45.429254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.615 qpair failed and we were unable to recover it. 
00:27:11.615 [2024-12-12 10:40:45.439156] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.615 [2024-12-12 10:40:45.439225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.615 [2024-12-12 10:40:45.439239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.615 [2024-12-12 10:40:45.439247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.615 [2024-12-12 10:40:45.439253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:11.615 [2024-12-12 10:40:45.439268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.615 qpair failed and we were unable to recover it. 00:27:11.615 [2024-12-12 10:40:45.449166] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.615 [2024-12-12 10:40:45.449222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.615 [2024-12-12 10:40:45.449238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.615 [2024-12-12 10:40:45.449245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.615 [2024-12-12 10:40:45.449251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:11.615 [2024-12-12 10:40:45.449266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.615 qpair failed and we were unable to recover it. 00:27:11.615 [2024-12-12 10:40:45.459199] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.615 [2024-12-12 10:40:45.459256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.615 [2024-12-12 10:40:45.459271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.615 [2024-12-12 10:40:45.459278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.615 [2024-12-12 10:40:45.459284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:11.615 [2024-12-12 10:40:45.459299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.615 qpair failed and we were unable to recover it. 
00:27:11.615 [2024-12-12 10:40:45.469225] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.615 [2024-12-12 10:40:45.469281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.615 [2024-12-12 10:40:45.469294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.615 [2024-12-12 10:40:45.469301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.615 [2024-12-12 10:40:45.469307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:11.615 [2024-12-12 10:40:45.469321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.615 qpair failed and we were unable to recover it. 00:27:11.616 [2024-12-12 10:40:45.479258] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.616 [2024-12-12 10:40:45.479312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.616 [2024-12-12 10:40:45.479325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.616 [2024-12-12 10:40:45.479332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.616 [2024-12-12 10:40:45.479339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:11.616 [2024-12-12 10:40:45.479353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.616 qpair failed and we were unable to recover it. 00:27:11.616 [2024-12-12 10:40:45.489279] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.616 [2024-12-12 10:40:45.489335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.616 [2024-12-12 10:40:45.489348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.616 [2024-12-12 10:40:45.489355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.616 [2024-12-12 10:40:45.489364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:11.616 [2024-12-12 10:40:45.489379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.616 qpair failed and we were unable to recover it. 
00:27:11.616 [2024-12-12 10:40:45.499321] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.616 [2024-12-12 10:40:45.499380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.616 [2024-12-12 10:40:45.499393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.616 [2024-12-12 10:40:45.499400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.616 [2024-12-12 10:40:45.499407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:11.616 [2024-12-12 10:40:45.499422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.616 qpair failed and we were unable to recover it.
00:27:11.616 [2024-12-12 10:40:45.509357] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.616 [2024-12-12 10:40:45.509413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.616 [2024-12-12 10:40:45.509427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.616 [2024-12-12 10:40:45.509434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.616 [2024-12-12 10:40:45.509441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:11.616 [2024-12-12 10:40:45.509455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.616 qpair failed and we were unable to recover it.
00:27:11.616 [2024-12-12 10:40:45.519377] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.616 [2024-12-12 10:40:45.519433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.616 [2024-12-12 10:40:45.519447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.616 [2024-12-12 10:40:45.519454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.616 [2024-12-12 10:40:45.519460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:11.616 [2024-12-12 10:40:45.519474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.616 qpair failed and we were unable to recover it.
00:27:11.616 [2024-12-12 10:40:45.529401] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.616 [2024-12-12 10:40:45.529458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.616 [2024-12-12 10:40:45.529472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.616 [2024-12-12 10:40:45.529479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.616 [2024-12-12 10:40:45.529486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:11.616 [2024-12-12 10:40:45.529501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.616 qpair failed and we were unable to recover it.
00:27:11.616 [2024-12-12 10:40:45.539440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.616 [2024-12-12 10:40:45.539495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.616 [2024-12-12 10:40:45.539508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.616 [2024-12-12 10:40:45.539515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.616 [2024-12-12 10:40:45.539522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:11.616 [2024-12-12 10:40:45.539537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.616 qpair failed and we were unable to recover it.
00:27:11.616 [2024-12-12 10:40:45.549457] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.616 [2024-12-12 10:40:45.549513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.616 [2024-12-12 10:40:45.549527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.616 [2024-12-12 10:40:45.549533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.616 [2024-12-12 10:40:45.549540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:11.616 [2024-12-12 10:40:45.549554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.616 qpair failed and we were unable to recover it.
00:27:11.616 [2024-12-12 10:40:45.559511] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.616 [2024-12-12 10:40:45.559564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.616 [2024-12-12 10:40:45.559583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.616 [2024-12-12 10:40:45.559590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.616 [2024-12-12 10:40:45.559596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:11.616 [2024-12-12 10:40:45.559611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.616 qpair failed and we were unable to recover it.
00:27:11.616 [2024-12-12 10:40:45.569493] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.616 [2024-12-12 10:40:45.569547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.616 [2024-12-12 10:40:45.569560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.616 [2024-12-12 10:40:45.569567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.616 [2024-12-12 10:40:45.569578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:11.616 [2024-12-12 10:40:45.569594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.616 qpair failed and we were unable to recover it.
00:27:11.616 [2024-12-12 10:40:45.579551] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.616 [2024-12-12 10:40:45.579649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.616 [2024-12-12 10:40:45.579666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.616 [2024-12-12 10:40:45.579673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.616 [2024-12-12 10:40:45.579679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:11.616 [2024-12-12 10:40:45.579693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.616 qpair failed and we were unable to recover it.
00:27:11.616 [2024-12-12 10:40:45.589577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.616 [2024-12-12 10:40:45.589634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.616 [2024-12-12 10:40:45.589647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.616 [2024-12-12 10:40:45.589653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.616 [2024-12-12 10:40:45.589660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:11.616 [2024-12-12 10:40:45.589674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.616 qpair failed and we were unable to recover it.
00:27:11.616 [2024-12-12 10:40:45.599605] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.616 [2024-12-12 10:40:45.599669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.616 [2024-12-12 10:40:45.599682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.616 [2024-12-12 10:40:45.599689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.616 [2024-12-12 10:40:45.599696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:11.616 [2024-12-12 10:40:45.599710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.616 qpair failed and we were unable to recover it.
00:27:11.616 [2024-12-12 10:40:45.609624] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.616 [2024-12-12 10:40:45.609680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.616 [2024-12-12 10:40:45.609694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.617 [2024-12-12 10:40:45.609701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.617 [2024-12-12 10:40:45.609707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:11.617 [2024-12-12 10:40:45.609722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.617 qpair failed and we were unable to recover it.
00:27:11.617 [2024-12-12 10:40:45.619675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.617 [2024-12-12 10:40:45.619731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.617 [2024-12-12 10:40:45.619746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.617 [2024-12-12 10:40:45.619753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.617 [2024-12-12 10:40:45.619763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:11.617 [2024-12-12 10:40:45.619778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.617 qpair failed and we were unable to recover it.
00:27:11.617 [2024-12-12 10:40:45.629700] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.617 [2024-12-12 10:40:45.629760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.617 [2024-12-12 10:40:45.629773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.617 [2024-12-12 10:40:45.629780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.617 [2024-12-12 10:40:45.629787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:11.617 [2024-12-12 10:40:45.629801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.617 qpair failed and we were unable to recover it.
00:27:11.877 [2024-12-12 10:40:45.639710] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.877 [2024-12-12 10:40:45.639765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.877 [2024-12-12 10:40:45.639778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.877 [2024-12-12 10:40:45.639785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.877 [2024-12-12 10:40:45.639791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:11.877 [2024-12-12 10:40:45.639806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.877 qpair failed and we were unable to recover it.
00:27:11.877 [2024-12-12 10:40:45.649746] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.877 [2024-12-12 10:40:45.649825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.877 [2024-12-12 10:40:45.649839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.877 [2024-12-12 10:40:45.649846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.877 [2024-12-12 10:40:45.649852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:11.877 [2024-12-12 10:40:45.649867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.877 qpair failed and we were unable to recover it.
00:27:11.877 [2024-12-12 10:40:45.659788] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.877 [2024-12-12 10:40:45.659845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.877 [2024-12-12 10:40:45.659858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.877 [2024-12-12 10:40:45.659865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.877 [2024-12-12 10:40:45.659871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:11.877 [2024-12-12 10:40:45.659886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.877 qpair failed and we were unable to recover it.
00:27:11.877 [2024-12-12 10:40:45.669822] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.877 [2024-12-12 10:40:45.669883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.877 [2024-12-12 10:40:45.669896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.877 [2024-12-12 10:40:45.669903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.877 [2024-12-12 10:40:45.669909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:11.877 [2024-12-12 10:40:45.669924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.877 qpair failed and we were unable to recover it.
00:27:11.877 [2024-12-12 10:40:45.679841] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.877 [2024-12-12 10:40:45.679896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.877 [2024-12-12 10:40:45.679909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.877 [2024-12-12 10:40:45.679916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.877 [2024-12-12 10:40:45.679923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:11.877 [2024-12-12 10:40:45.679937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.877 qpair failed and we were unable to recover it.
00:27:11.877 [2024-12-12 10:40:45.689923] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.877 [2024-12-12 10:40:45.689986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.877 [2024-12-12 10:40:45.689998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.877 [2024-12-12 10:40:45.690006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.877 [2024-12-12 10:40:45.690012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:11.877 [2024-12-12 10:40:45.690027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.877 qpair failed and we were unable to recover it.
00:27:11.877 [2024-12-12 10:40:45.699954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.877 [2024-12-12 10:40:45.700012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.877 [2024-12-12 10:40:45.700025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.877 [2024-12-12 10:40:45.700032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.877 [2024-12-12 10:40:45.700038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:11.877 [2024-12-12 10:40:45.700053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.877 qpair failed and we were unable to recover it.
00:27:11.877 [2024-12-12 10:40:45.709970] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.877 [2024-12-12 10:40:45.710039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.877 [2024-12-12 10:40:45.710055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.877 [2024-12-12 10:40:45.710062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.877 [2024-12-12 10:40:45.710068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:11.877 [2024-12-12 10:40:45.710083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.877 qpair failed and we were unable to recover it.
00:27:11.877 [2024-12-12 10:40:45.719960] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.877 [2024-12-12 10:40:45.720018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.877 [2024-12-12 10:40:45.720032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.877 [2024-12-12 10:40:45.720039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.877 [2024-12-12 10:40:45.720045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:11.877 [2024-12-12 10:40:45.720060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.877 qpair failed and we were unable to recover it.
00:27:11.877 [2024-12-12 10:40:45.730036] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.877 [2024-12-12 10:40:45.730140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.877 [2024-12-12 10:40:45.730153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.877 [2024-12-12 10:40:45.730160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.877 [2024-12-12 10:40:45.730166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:11.877 [2024-12-12 10:40:45.730181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.878 qpair failed and we were unable to recover it.
00:27:11.878 [2024-12-12 10:40:45.740030] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.878 [2024-12-12 10:40:45.740094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.878 [2024-12-12 10:40:45.740107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.878 [2024-12-12 10:40:45.740114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.878 [2024-12-12 10:40:45.740120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:11.878 [2024-12-12 10:40:45.740135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.878 qpair failed and we were unable to recover it.
00:27:11.878 [2024-12-12 10:40:45.750052] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.878 [2024-12-12 10:40:45.750105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.878 [2024-12-12 10:40:45.750118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.878 [2024-12-12 10:40:45.750129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.878 [2024-12-12 10:40:45.750135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:11.878 [2024-12-12 10:40:45.750150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.878 qpair failed and we were unable to recover it.
00:27:11.878 [2024-12-12 10:40:45.760115] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.878 [2024-12-12 10:40:45.760179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.878 [2024-12-12 10:40:45.760192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.878 [2024-12-12 10:40:45.760200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.878 [2024-12-12 10:40:45.760206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:11.878 [2024-12-12 10:40:45.760221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.878 qpair failed and we were unable to recover it.
00:27:11.878 [2024-12-12 10:40:45.770097] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.878 [2024-12-12 10:40:45.770148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.878 [2024-12-12 10:40:45.770161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.878 [2024-12-12 10:40:45.770168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.878 [2024-12-12 10:40:45.770174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:11.878 [2024-12-12 10:40:45.770189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.878 qpair failed and we were unable to recover it.
00:27:11.878 [2024-12-12 10:40:45.780134] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.878 [2024-12-12 10:40:45.780189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.878 [2024-12-12 10:40:45.780202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.878 [2024-12-12 10:40:45.780209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.878 [2024-12-12 10:40:45.780215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:11.878 [2024-12-12 10:40:45.780229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.878 qpair failed and we were unable to recover it.
00:27:11.878 [2024-12-12 10:40:45.790171] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.878 [2024-12-12 10:40:45.790276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.878 [2024-12-12 10:40:45.790289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.878 [2024-12-12 10:40:45.790296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.878 [2024-12-12 10:40:45.790302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:11.878 [2024-12-12 10:40:45.790320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.878 qpair failed and we were unable to recover it.
00:27:11.878 [2024-12-12 10:40:45.800200] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.878 [2024-12-12 10:40:45.800273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.878 [2024-12-12 10:40:45.800286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.878 [2024-12-12 10:40:45.800293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.878 [2024-12-12 10:40:45.800299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:11.878 [2024-12-12 10:40:45.800314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.878 qpair failed and we were unable to recover it.
00:27:11.878 [2024-12-12 10:40:45.810213] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.878 [2024-12-12 10:40:45.810269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.878 [2024-12-12 10:40:45.810282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.878 [2024-12-12 10:40:45.810289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.878 [2024-12-12 10:40:45.810296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:11.878 [2024-12-12 10:40:45.810310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.878 qpair failed and we were unable to recover it.
00:27:11.878 [2024-12-12 10:40:45.820258] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.878 [2024-12-12 10:40:45.820315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.878 [2024-12-12 10:40:45.820329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.878 [2024-12-12 10:40:45.820336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.878 [2024-12-12 10:40:45.820342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:11.878 [2024-12-12 10:40:45.820357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.878 qpair failed and we were unable to recover it.
00:27:11.878 [2024-12-12 10:40:45.830280] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.878 [2024-12-12 10:40:45.830334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.878 [2024-12-12 10:40:45.830347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.878 [2024-12-12 10:40:45.830355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.878 [2024-12-12 10:40:45.830361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:11.878 [2024-12-12 10:40:45.830376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.878 qpair failed and we were unable to recover it.
00:27:11.878 [2024-12-12 10:40:45.840325] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.878 [2024-12-12 10:40:45.840382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.878 [2024-12-12 10:40:45.840395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.878 [2024-12-12 10:40:45.840402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.878 [2024-12-12 10:40:45.840408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:11.878 [2024-12-12 10:40:45.840423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.878 qpair failed and we were unable to recover it.
00:27:11.878 [2024-12-12 10:40:45.850324] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.878 [2024-12-12 10:40:45.850376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.878 [2024-12-12 10:40:45.850390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.878 [2024-12-12 10:40:45.850396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.878 [2024-12-12 10:40:45.850403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:11.878 [2024-12-12 10:40:45.850418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.878 qpair failed and we were unable to recover it.
00:27:11.878 [2024-12-12 10:40:45.860411] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.878 [2024-12-12 10:40:45.860466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.878 [2024-12-12 10:40:45.860479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.878 [2024-12-12 10:40:45.860486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.878 [2024-12-12 10:40:45.860493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:11.878 [2024-12-12 10:40:45.860507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.878 qpair failed and we were unable to recover it.
00:27:11.878 [2024-12-12 10:40:45.870390] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.878 [2024-12-12 10:40:45.870449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.879 [2024-12-12 10:40:45.870462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.879 [2024-12-12 10:40:45.870470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.879 [2024-12-12 10:40:45.870477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:11.879 [2024-12-12 10:40:45.870491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.879 qpair failed and we were unable to recover it.
00:27:11.879 [2024-12-12 10:40:45.880406] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.879 [2024-12-12 10:40:45.880460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.879 [2024-12-12 10:40:45.880474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.879 [2024-12-12 10:40:45.880484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.879 [2024-12-12 10:40:45.880490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:11.879 [2024-12-12 10:40:45.880505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.879 qpair failed and we were unable to recover it.
00:27:11.879 [2024-12-12 10:40:45.890442] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.879 [2024-12-12 10:40:45.890498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.879 [2024-12-12 10:40:45.890511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.879 [2024-12-12 10:40:45.890518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.879 [2024-12-12 10:40:45.890524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:11.879 [2024-12-12 10:40:45.890539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.879 qpair failed and we were unable to recover it.
00:27:12.139 [2024-12-12 10:40:45.900492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.139 [2024-12-12 10:40:45.900557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.139 [2024-12-12 10:40:45.900576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.139 [2024-12-12 10:40:45.900584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.139 [2024-12-12 10:40:45.900590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:12.139 [2024-12-12 10:40:45.900606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.139 qpair failed and we were unable to recover it.
00:27:12.139 [2024-12-12 10:40:45.910503] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.139 [2024-12-12 10:40:45.910558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.139 [2024-12-12 10:40:45.910575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.139 [2024-12-12 10:40:45.910582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.139 [2024-12-12 10:40:45.910588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:12.139 [2024-12-12 10:40:45.910604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.139 qpair failed and we were unable to recover it.
00:27:12.139 [2024-12-12 10:40:45.920552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.139 [2024-12-12 10:40:45.920632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.139 [2024-12-12 10:40:45.920646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.139 [2024-12-12 10:40:45.920653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.139 [2024-12-12 10:40:45.920659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:12.139 [2024-12-12 10:40:45.920678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.139 qpair failed and we were unable to recover it.
00:27:12.139 [2024-12-12 10:40:45.930557] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.139 [2024-12-12 10:40:45.930620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.139 [2024-12-12 10:40:45.930635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.139 [2024-12-12 10:40:45.930642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.139 [2024-12-12 10:40:45.930648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:12.139 [2024-12-12 10:40:45.930662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.139 qpair failed and we were unable to recover it.
00:27:12.139 [2024-12-12 10:40:45.940623] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.139 [2024-12-12 10:40:45.940679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.139 [2024-12-12 10:40:45.940692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.139 [2024-12-12 10:40:45.940700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.139 [2024-12-12 10:40:45.940706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:12.139 [2024-12-12 10:40:45.940721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.139 qpair failed and we were unable to recover it.
00:27:12.139 [2024-12-12 10:40:45.950619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.139 [2024-12-12 10:40:45.950675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.139 [2024-12-12 10:40:45.950689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.139 [2024-12-12 10:40:45.950696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.139 [2024-12-12 10:40:45.950702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:12.139 [2024-12-12 10:40:45.950717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.139 qpair failed and we were unable to recover it.
00:27:12.139 [2024-12-12 10:40:45.960647] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.139 [2024-12-12 10:40:45.960702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.139 [2024-12-12 10:40:45.960716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.139 [2024-12-12 10:40:45.960723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.139 [2024-12-12 10:40:45.960729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:12.139 [2024-12-12 10:40:45.960744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.139 qpair failed and we were unable to recover it.
00:27:12.139 [2024-12-12 10:40:45.970689] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.139 [2024-12-12 10:40:45.970758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.139 [2024-12-12 10:40:45.970774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.139 [2024-12-12 10:40:45.970782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.139 [2024-12-12 10:40:45.970789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:12.139 [2024-12-12 10:40:45.970806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.139 qpair failed and we were unable to recover it.
00:27:12.139 [2024-12-12 10:40:45.980734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.139 [2024-12-12 10:40:45.980807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.139 [2024-12-12 10:40:45.980820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.139 [2024-12-12 10:40:45.980828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.139 [2024-12-12 10:40:45.980833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:12.139 [2024-12-12 10:40:45.980849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.139 qpair failed and we were unable to recover it.
00:27:12.139 [2024-12-12 10:40:45.990764] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.139 [2024-12-12 10:40:45.990852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.139 [2024-12-12 10:40:45.990865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.139 [2024-12-12 10:40:45.990872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.139 [2024-12-12 10:40:45.990879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:12.139 [2024-12-12 10:40:45.990894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.139 qpair failed and we were unable to recover it.
00:27:12.139 [2024-12-12 10:40:46.000763] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.139 [2024-12-12 10:40:46.000821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.139 [2024-12-12 10:40:46.000834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.139 [2024-12-12 10:40:46.000842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.140 [2024-12-12 10:40:46.000849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:12.140 [2024-12-12 10:40:46.000863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.140 qpair failed and we were unable to recover it.
00:27:12.140 [2024-12-12 10:40:46.010725] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.140 [2024-12-12 10:40:46.010782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.140 [2024-12-12 10:40:46.010798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.140 [2024-12-12 10:40:46.010805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.140 [2024-12-12 10:40:46.010812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:12.140 [2024-12-12 10:40:46.010827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.140 qpair failed and we were unable to recover it.
00:27:12.140 [2024-12-12 10:40:46.020820] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.140 [2024-12-12 10:40:46.020876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.140 [2024-12-12 10:40:46.020890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.140 [2024-12-12 10:40:46.020897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.140 [2024-12-12 10:40:46.020904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:12.140 [2024-12-12 10:40:46.020918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.140 qpair failed and we were unable to recover it.
00:27:12.140 [2024-12-12 10:40:46.030859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.140 [2024-12-12 10:40:46.030921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.140 [2024-12-12 10:40:46.030935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.140 [2024-12-12 10:40:46.030942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.140 [2024-12-12 10:40:46.030948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:12.140 [2024-12-12 10:40:46.030963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.140 qpair failed and we were unable to recover it.
00:27:12.140 [2024-12-12 10:40:46.040892] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.140 [2024-12-12 10:40:46.040948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.140 [2024-12-12 10:40:46.040962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.140 [2024-12-12 10:40:46.040969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.140 [2024-12-12 10:40:46.040976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:12.140 [2024-12-12 10:40:46.040991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.140 qpair failed and we were unable to recover it.
00:27:12.140 [2024-12-12 10:40:46.050891] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.140 [2024-12-12 10:40:46.050982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.140 [2024-12-12 10:40:46.050996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.140 [2024-12-12 10:40:46.051003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.140 [2024-12-12 10:40:46.051021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:12.140 [2024-12-12 10:40:46.051037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.140 qpair failed and we were unable to recover it.
00:27:12.140 [2024-12-12 10:40:46.060942] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.140 [2024-12-12 10:40:46.061044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.140 [2024-12-12 10:40:46.061058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.140 [2024-12-12 10:40:46.061064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.140 [2024-12-12 10:40:46.061070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:12.140 [2024-12-12 10:40:46.061085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.140 qpair failed and we were unable to recover it.
00:27:12.140 [2024-12-12 10:40:46.070945] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.140 [2024-12-12 10:40:46.071001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.140 [2024-12-12 10:40:46.071014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.140 [2024-12-12 10:40:46.071021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.140 [2024-12-12 10:40:46.071028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:12.140 [2024-12-12 10:40:46.071042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.140 qpair failed and we were unable to recover it.
00:27:12.140 [2024-12-12 10:40:46.080953] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.140 [2024-12-12 10:40:46.081010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.140 [2024-12-12 10:40:46.081024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.140 [2024-12-12 10:40:46.081031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.140 [2024-12-12 10:40:46.081037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:12.140 [2024-12-12 10:40:46.081052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.140 qpair failed and we were unable to recover it.
00:27:12.140 [2024-12-12 10:40:46.090995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.140 [2024-12-12 10:40:46.091053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.140 [2024-12-12 10:40:46.091066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.140 [2024-12-12 10:40:46.091074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.140 [2024-12-12 10:40:46.091081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:12.140 [2024-12-12 10:40:46.091096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.140 qpair failed and we were unable to recover it.
00:27:12.140 [2024-12-12 10:40:46.101044] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.140 [2024-12-12 10:40:46.101098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.140 [2024-12-12 10:40:46.101112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.140 [2024-12-12 10:40:46.101120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.140 [2024-12-12 10:40:46.101126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:12.140 [2024-12-12 10:40:46.101141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.140 qpair failed and we were unable to recover it.
00:27:12.140 [2024-12-12 10:40:46.110987] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.140 [2024-12-12 10:40:46.111057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.140 [2024-12-12 10:40:46.111071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.140 [2024-12-12 10:40:46.111078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.140 [2024-12-12 10:40:46.111084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:12.140 [2024-12-12 10:40:46.111099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.140 qpair failed and we were unable to recover it.
00:27:12.140 [2024-12-12 10:40:46.121094] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.140 [2024-12-12 10:40:46.121160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.140 [2024-12-12 10:40:46.121173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.141 [2024-12-12 10:40:46.121181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.141 [2024-12-12 10:40:46.121187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:12.141 [2024-12-12 10:40:46.121202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.141 qpair failed and we were unable to recover it.
00:27:12.141 [2024-12-12 10:40:46.131128] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.141 [2024-12-12 10:40:46.131184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.141 [2024-12-12 10:40:46.131198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.141 [2024-12-12 10:40:46.131205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.141 [2024-12-12 10:40:46.131211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:12.141 [2024-12-12 10:40:46.131225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.141 qpair failed and we were unable to recover it.
00:27:12.141 [2024-12-12 10:40:46.141101] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.141 [2024-12-12 10:40:46.141159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.141 [2024-12-12 10:40:46.141176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.141 [2024-12-12 10:40:46.141183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.141 [2024-12-12 10:40:46.141189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:12.141 [2024-12-12 10:40:46.141203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.141 qpair failed and we were unable to recover it.
00:27:12.141 [2024-12-12 10:40:46.151155] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.141 [2024-12-12 10:40:46.151240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.141 [2024-12-12 10:40:46.151253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.141 [2024-12-12 10:40:46.151261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.141 [2024-12-12 10:40:46.151267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:12.141 [2024-12-12 10:40:46.151282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.141 qpair failed and we were unable to recover it.
00:27:12.401 [2024-12-12 10:40:46.161197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.401 [2024-12-12 10:40:46.161254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.401 [2024-12-12 10:40:46.161268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.401 [2024-12-12 10:40:46.161276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.401 [2024-12-12 10:40:46.161283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:12.401 [2024-12-12 10:40:46.161299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.401 qpair failed and we were unable to recover it.
00:27:12.401 [2024-12-12 10:40:46.171258] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.401 [2024-12-12 10:40:46.171317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.401 [2024-12-12 10:40:46.171331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.401 [2024-12-12 10:40:46.171338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.401 [2024-12-12 10:40:46.171344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:12.401 [2024-12-12 10:40:46.171359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.401 qpair failed and we were unable to recover it.
00:27:12.401 [2024-12-12 10:40:46.181314] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.401 [2024-12-12 10:40:46.181374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.401 [2024-12-12 10:40:46.181387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.401 [2024-12-12 10:40:46.181394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.401 [2024-12-12 10:40:46.181404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90
00:27:12.401 [2024-12-12 10:40:46.181418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.401 qpair failed and we were unable to recover it.
00:27:12.401 [2024-12-12 10:40:46.191326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.401 [2024-12-12 10:40:46.191389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.401 [2024-12-12 10:40:46.191403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.401 [2024-12-12 10:40:46.191410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.401 [2024-12-12 10:40:46.191417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:12.401 [2024-12-12 10:40:46.191432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.401 qpair failed and we were unable to recover it. 00:27:12.401 [2024-12-12 10:40:46.201372] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.401 [2024-12-12 10:40:46.201430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.401 [2024-12-12 10:40:46.201444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.402 [2024-12-12 10:40:46.201450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.402 [2024-12-12 10:40:46.201457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:12.402 [2024-12-12 10:40:46.201472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.402 qpair failed and we were unable to recover it. 00:27:12.402 [2024-12-12 10:40:46.211356] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.402 [2024-12-12 10:40:46.211411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.402 [2024-12-12 10:40:46.211425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.402 [2024-12-12 10:40:46.211432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.402 [2024-12-12 10:40:46.211438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:12.402 [2024-12-12 10:40:46.211453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.402 qpair failed and we were unable to recover it. 
00:27:12.402 [2024-12-12 10:40:46.221371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.402 [2024-12-12 10:40:46.221453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.402 [2024-12-12 10:40:46.221467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.402 [2024-12-12 10:40:46.221474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.402 [2024-12-12 10:40:46.221480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:12.402 [2024-12-12 10:40:46.221496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.402 qpair failed and we were unable to recover it. 00:27:12.402 [2024-12-12 10:40:46.231397] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.402 [2024-12-12 10:40:46.231486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.402 [2024-12-12 10:40:46.231499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.402 [2024-12-12 10:40:46.231506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.402 [2024-12-12 10:40:46.231512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:12.402 [2024-12-12 10:40:46.231527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.402 qpair failed and we were unable to recover it. 00:27:12.402 [2024-12-12 10:40:46.241420] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.402 [2024-12-12 10:40:46.241478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.402 [2024-12-12 10:40:46.241492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.402 [2024-12-12 10:40:46.241499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.402 [2024-12-12 10:40:46.241505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:12.402 [2024-12-12 10:40:46.241520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.402 qpair failed and we were unable to recover it. 
00:27:12.402 [2024-12-12 10:40:46.251439] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.402 [2024-12-12 10:40:46.251493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.402 [2024-12-12 10:40:46.251506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.402 [2024-12-12 10:40:46.251513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.402 [2024-12-12 10:40:46.251520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:12.402 [2024-12-12 10:40:46.251535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.402 qpair failed and we were unable to recover it. 00:27:12.402 [2024-12-12 10:40:46.261505] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.402 [2024-12-12 10:40:46.261563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.402 [2024-12-12 10:40:46.261582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.402 [2024-12-12 10:40:46.261591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.402 [2024-12-12 10:40:46.261597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:12.402 [2024-12-12 10:40:46.261612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.402 qpair failed and we were unable to recover it. 00:27:12.402 [2024-12-12 10:40:46.271512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.402 [2024-12-12 10:40:46.271592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.402 [2024-12-12 10:40:46.271608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.402 [2024-12-12 10:40:46.271615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.402 [2024-12-12 10:40:46.271622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:12.402 [2024-12-12 10:40:46.271636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.402 qpair failed and we were unable to recover it. 
00:27:12.402 [2024-12-12 10:40:46.281477] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.402 [2024-12-12 10:40:46.281541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.402 [2024-12-12 10:40:46.281555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.402 [2024-12-12 10:40:46.281563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.402 [2024-12-12 10:40:46.281574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:12.402 [2024-12-12 10:40:46.281589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.402 qpair failed and we were unable to recover it. 00:27:12.402 [2024-12-12 10:40:46.291612] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.402 [2024-12-12 10:40:46.291665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.402 [2024-12-12 10:40:46.291678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.402 [2024-12-12 10:40:46.291685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.402 [2024-12-12 10:40:46.291693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:12.402 [2024-12-12 10:40:46.291708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.402 qpair failed and we were unable to recover it. 00:27:12.402 [2024-12-12 10:40:46.301587] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.402 [2024-12-12 10:40:46.301656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.402 [2024-12-12 10:40:46.301669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.402 [2024-12-12 10:40:46.301676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.402 [2024-12-12 10:40:46.301682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:12.402 [2024-12-12 10:40:46.301697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.402 qpair failed and we were unable to recover it. 
00:27:12.402 [2024-12-12 10:40:46.311627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.402 [2024-12-12 10:40:46.311686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.402 [2024-12-12 10:40:46.311699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.402 [2024-12-12 10:40:46.311709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.402 [2024-12-12 10:40:46.311715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:12.402 [2024-12-12 10:40:46.311729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.402 qpair failed and we were unable to recover it. 00:27:12.402 [2024-12-12 10:40:46.321595] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.402 [2024-12-12 10:40:46.321650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.402 [2024-12-12 10:40:46.321663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.402 [2024-12-12 10:40:46.321670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.402 [2024-12-12 10:40:46.321676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:12.402 [2024-12-12 10:40:46.321690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.402 qpair failed and we were unable to recover it. 00:27:12.402 [2024-12-12 10:40:46.331632] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.403 [2024-12-12 10:40:46.331718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.403 [2024-12-12 10:40:46.331733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.403 [2024-12-12 10:40:46.331741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.403 [2024-12-12 10:40:46.331748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:12.403 [2024-12-12 10:40:46.331763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.403 qpair failed and we were unable to recover it. 
00:27:12.403 [2024-12-12 10:40:46.341730] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.403 [2024-12-12 10:40:46.341800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.403 [2024-12-12 10:40:46.341814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.403 [2024-12-12 10:40:46.341821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.403 [2024-12-12 10:40:46.341828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:12.403 [2024-12-12 10:40:46.341843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.403 qpair failed and we were unable to recover it. 00:27:12.403 [2024-12-12 10:40:46.351744] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.403 [2024-12-12 10:40:46.351828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.403 [2024-12-12 10:40:46.351841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.403 [2024-12-12 10:40:46.351848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.403 [2024-12-12 10:40:46.351854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:12.403 [2024-12-12 10:40:46.351872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.403 qpair failed and we were unable to recover it. 00:27:12.403 [2024-12-12 10:40:46.361778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.403 [2024-12-12 10:40:46.361830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.403 [2024-12-12 10:40:46.361842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.403 [2024-12-12 10:40:46.361849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.403 [2024-12-12 10:40:46.361856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:12.403 [2024-12-12 10:40:46.361871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.403 qpair failed and we were unable to recover it. 
00:27:12.403 [2024-12-12 10:40:46.371788] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.403 [2024-12-12 10:40:46.371842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.403 [2024-12-12 10:40:46.371854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.403 [2024-12-12 10:40:46.371862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.403 [2024-12-12 10:40:46.371868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:12.403 [2024-12-12 10:40:46.371884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.403 qpair failed and we were unable to recover it. 00:27:12.403 [2024-12-12 10:40:46.381873] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.403 [2024-12-12 10:40:46.381932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.403 [2024-12-12 10:40:46.381946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.403 [2024-12-12 10:40:46.381953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.403 [2024-12-12 10:40:46.381959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:12.403 [2024-12-12 10:40:46.381974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.403 qpair failed and we were unable to recover it. 00:27:12.403 [2024-12-12 10:40:46.391876] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.403 [2024-12-12 10:40:46.391948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.403 [2024-12-12 10:40:46.391962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.403 [2024-12-12 10:40:46.391969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.403 [2024-12-12 10:40:46.391975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:12.403 [2024-12-12 10:40:46.391989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.403 qpair failed and we were unable to recover it. 
00:27:12.403 [2024-12-12 10:40:46.401889] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.403 [2024-12-12 10:40:46.401947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.403 [2024-12-12 10:40:46.401961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.403 [2024-12-12 10:40:46.401968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.403 [2024-12-12 10:40:46.401974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:12.403 [2024-12-12 10:40:46.401989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.403 qpair failed and we were unable to recover it. 00:27:12.403 [2024-12-12 10:40:46.411942] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.403 [2024-12-12 10:40:46.411998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.403 [2024-12-12 10:40:46.412011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.403 [2024-12-12 10:40:46.412018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.403 [2024-12-12 10:40:46.412024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:12.403 [2024-12-12 10:40:46.412038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.403 qpair failed and we were unable to recover it. 00:27:12.403 [2024-12-12 10:40:46.421953] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.403 [2024-12-12 10:40:46.422008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.403 [2024-12-12 10:40:46.422021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.403 [2024-12-12 10:40:46.422028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.403 [2024-12-12 10:40:46.422035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:12.403 [2024-12-12 10:40:46.422049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.403 qpair failed and we were unable to recover it. 
00:27:12.664 [2024-12-12 10:40:46.431990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.664 [2024-12-12 10:40:46.432052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.664 [2024-12-12 10:40:46.432065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.664 [2024-12-12 10:40:46.432072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.664 [2024-12-12 10:40:46.432078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:12.664 [2024-12-12 10:40:46.432093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.664 qpair failed and we were unable to recover it. 00:27:12.664 [2024-12-12 10:40:46.442001] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.664 [2024-12-12 10:40:46.442055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.664 [2024-12-12 10:40:46.442069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.664 [2024-12-12 10:40:46.442078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.664 [2024-12-12 10:40:46.442084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:12.664 [2024-12-12 10:40:46.442099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.664 qpair failed and we were unable to recover it. 00:27:12.664 [2024-12-12 10:40:46.451950] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.664 [2024-12-12 10:40:46.452008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.664 [2024-12-12 10:40:46.452021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.664 [2024-12-12 10:40:46.452028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.664 [2024-12-12 10:40:46.452035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:12.664 [2024-12-12 10:40:46.452049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.664 qpair failed and we were unable to recover it. 
00:27:12.664 [2024-12-12 10:40:46.462002] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.664 [2024-12-12 10:40:46.462059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.664 [2024-12-12 10:40:46.462072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.664 [2024-12-12 10:40:46.462079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.664 [2024-12-12 10:40:46.462085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:12.664 [2024-12-12 10:40:46.462099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.664 qpair failed and we were unable to recover it. 00:27:12.664 [2024-12-12 10:40:46.472091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.664 [2024-12-12 10:40:46.472149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.664 [2024-12-12 10:40:46.472162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.664 [2024-12-12 10:40:46.472170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.664 [2024-12-12 10:40:46.472176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:12.664 [2024-12-12 10:40:46.472190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.664 qpair failed and we were unable to recover it. 00:27:12.664 [2024-12-12 10:40:46.482038] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.664 [2024-12-12 10:40:46.482096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.664 [2024-12-12 10:40:46.482109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.664 [2024-12-12 10:40:46.482117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.664 [2024-12-12 10:40:46.482123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:12.664 [2024-12-12 10:40:46.482141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.664 qpair failed and we were unable to recover it. 
00:27:12.664 [2024-12-12 10:40:46.492154] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.664 [2024-12-12 10:40:46.492226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.664 [2024-12-12 10:40:46.492239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.664 [2024-12-12 10:40:46.492246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.664 [2024-12-12 10:40:46.492252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:12.664 [2024-12-12 10:40:46.492266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.664 qpair failed and we were unable to recover it. 00:27:12.664 [2024-12-12 10:40:46.502217] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.664 [2024-12-12 10:40:46.502273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.664 [2024-12-12 10:40:46.502285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.664 [2024-12-12 10:40:46.502293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.664 [2024-12-12 10:40:46.502300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:12.664 [2024-12-12 10:40:46.502314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.664 qpair failed and we were unable to recover it. 00:27:12.664 [2024-12-12 10:40:46.512198] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.664 [2024-12-12 10:40:46.512252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.664 [2024-12-12 10:40:46.512265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.664 [2024-12-12 10:40:46.512272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.664 [2024-12-12 10:40:46.512278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:12.664 [2024-12-12 10:40:46.512293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.664 qpair failed and we were unable to recover it. 
00:27:12.664 [2024-12-12 10:40:46.522221] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.664 [2024-12-12 10:40:46.522277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.664 [2024-12-12 10:40:46.522292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.664 [2024-12-12 10:40:46.522300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.664 [2024-12-12 10:40:46.522306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:12.664 [2024-12-12 10:40:46.522320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.664 qpair failed and we were unable to recover it. 00:27:12.664 [2024-12-12 10:40:46.532252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.664 [2024-12-12 10:40:46.532305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.664 [2024-12-12 10:40:46.532319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.664 [2024-12-12 10:40:46.532326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.664 [2024-12-12 10:40:46.532333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:12.665 [2024-12-12 10:40:46.532348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.665 qpair failed and we were unable to recover it. 00:27:12.665 [2024-12-12 10:40:46.542228] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.665 [2024-12-12 10:40:46.542281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.665 [2024-12-12 10:40:46.542295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.665 [2024-12-12 10:40:46.542302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.665 [2024-12-12 10:40:46.542308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:12.665 [2024-12-12 10:40:46.542323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.665 qpair failed and we were unable to recover it. 
00:27:12.665 [2024-12-12 10:40:46.552315] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.665 [2024-12-12 10:40:46.552370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.665 [2024-12-12 10:40:46.552383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.665 [2024-12-12 10:40:46.552390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.665 [2024-12-12 10:40:46.552396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:12.665 [2024-12-12 10:40:46.552411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.665 qpair failed and we were unable to recover it. 00:27:12.665 [2024-12-12 10:40:46.562344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.665 [2024-12-12 10:40:46.562396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.665 [2024-12-12 10:40:46.562409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.665 [2024-12-12 10:40:46.562416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.665 [2024-12-12 10:40:46.562422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:12.665 [2024-12-12 10:40:46.562436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.665 qpair failed and we were unable to recover it. 00:27:12.665 [2024-12-12 10:40:46.572408] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.665 [2024-12-12 10:40:46.572462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.665 [2024-12-12 10:40:46.572478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.665 [2024-12-12 10:40:46.572485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.665 [2024-12-12 10:40:46.572492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:12.665 [2024-12-12 10:40:46.572507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.665 qpair failed and we were unable to recover it. 
00:27:12.665 [2024-12-12 10:40:46.582407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.665 [2024-12-12 10:40:46.582487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.665 [2024-12-12 10:40:46.582501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.665 [2024-12-12 10:40:46.582509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.665 [2024-12-12 10:40:46.582514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:12.665 [2024-12-12 10:40:46.582529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.665 qpair failed and we were unable to recover it. 00:27:12.665 [2024-12-12 10:40:46.592433] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.665 [2024-12-12 10:40:46.592493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.665 [2024-12-12 10:40:46.592506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.665 [2024-12-12 10:40:46.592513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.665 [2024-12-12 10:40:46.592520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:12.665 [2024-12-12 10:40:46.592535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.665 qpair failed and we were unable to recover it. 00:27:12.665 [2024-12-12 10:40:46.602460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.665 [2024-12-12 10:40:46.602516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.665 [2024-12-12 10:40:46.602529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.665 [2024-12-12 10:40:46.602537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.665 [2024-12-12 10:40:46.602543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:12.665 [2024-12-12 10:40:46.602559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.665 qpair failed and we were unable to recover it. 
00:27:12.665 [2024-12-12 10:40:46.612412] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.665 [2024-12-12 10:40:46.612465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.665 [2024-12-12 10:40:46.612478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.665 [2024-12-12 10:40:46.612485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.665 [2024-12-12 10:40:46.612494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:12.665 [2024-12-12 10:40:46.612508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.665 qpair failed and we were unable to recover it. 00:27:12.665 [2024-12-12 10:40:46.622523] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.665 [2024-12-12 10:40:46.622595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.665 [2024-12-12 10:40:46.622609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.665 [2024-12-12 10:40:46.622617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.665 [2024-12-12 10:40:46.622622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:12.665 [2024-12-12 10:40:46.622637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.665 qpair failed and we were unable to recover it. 00:27:12.665 [2024-12-12 10:40:46.632557] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.665 [2024-12-12 10:40:46.632618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.665 [2024-12-12 10:40:46.632631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.665 [2024-12-12 10:40:46.632638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.665 [2024-12-12 10:40:46.632644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb830000b90 00:27:12.665 [2024-12-12 10:40:46.632660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.665 qpair failed and we were unable to recover it. 
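A note on the status fields in these records: sct 1 marks a command-specific status type, and for a Fabrics CONNECT an sc of 130 (0x82) corresponds to the "Connect Invalid Parameters" code, which is how the target reports the stale controller ID 0x1; the -6 is ENXIO, hence "No such device or address". A quick, hedged way to confirm the mapping against an SPDK checkout (with $SPDK_DIR standing in for the repo root; the header path assumes the usual repo layout):

# Decode sc 130 and look up the Fabrics status codes in SPDK's spec header.
printf 'sc %d = 0x%02x\n' 130 130                       # prints: sc 130 = 0x82
grep -n 'FABRIC_SC' "$SPDK_DIR"/include/spdk/nvmf_spec.h
# expected to show entries like SPDK_NVMF_FABRIC_SC_INVALID_PARAM = 0x82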
[two more identical failures against qpair id 2 at 10:40:46.642567 and 10:40:46.652603, then the same CONNECT rejection moves to the remaining queues: qpair id 1 at 10:40:46.662645 and 10:40:46.672700 (tqpair=0x7fb838000b90), then qpair id 3 at 10:40:46.682736 and 10:40:46.692722 (tqpair=0x1c1b1a0); every attempt ends "qpair failed and we were unable to recover it."]
00:27:12.925 [2024-12-12 10:40:46.693001] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
00:27:12.925 A controller has encountered a failure and is being reset.
00:27:12.925 Controller properly reset.
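Taken together this is the expected shape of an induced disconnect: the target has destroyed controller 0x1, every I/O-queue CONNECT bounces with "Unknown controller ID" until the host's keep-alive finally fails, and the reset path brings the controller back ("Controller properly reset."). The log does not show the mechanism the test itself uses to cut the connection, so the sketch below is only an illustrative stand-in that provokes similar churn by hand, using a kernel initiator and standard SPDK RPCs ($SPDK_DIR again marks the checkout):

NQN=nqn.2016-06.io.spdk:cnode1

# Attach a kernel initiator to the same coordinates the log shows.
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "$NQN"

# Deleting the subsystem destroys its controllers (controller ID 0x1
# included), so reconnect attempts are rejected until the subsystem is
# recreated; see the setup sketch after the attach summary below.
"$SPDK_DIR"/scripts/rpc.py nvmf_delete_subsystem "$NQN"
sleep 2

# Watch the host-side reconnect/reset churn, then detach cleanly.
dmesg | grep -i nvme | tail -n 20
nvme disconnect -n "$NQN"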
00:27:12.925 Initializing NVMe Controllers 00:27:12.925 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:12.925 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:12.925 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:12.925 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:12.925 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:12.925 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:12.925 Initialization complete. Launching workers. 00:27:12.925 Starting thread on core 1 00:27:12.925 Starting thread on core 2 00:27:12.925 Starting thread on core 3 00:27:12.925 Starting thread on core 0 00:27:12.925 10:40:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:27:12.925 00:27:12.925 real 0m11.005s 00:27:12.925 user 0m19.196s 00:27:12.925 sys 0m4.782s 00:27:12.925 10:40:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:12.925 10:40:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:12.925 ************************************ 00:27:12.925 END TEST nvmf_target_disconnect_tc2 00:27:12.925 ************************************ 00:27:12.925 10:40:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:27:12.925 10:40:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:12.925 10:40:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:27:12.925 10:40:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:12.925 10:40:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:27:12.925 10:40:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:12.925 10:40:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:27:12.925 10:40:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:12.925 10:40:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:12.925 rmmod nvme_tcp 00:27:12.925 rmmod nvme_fabrics 00:27:12.925 rmmod nvme_keyring 00:27:12.925 10:40:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:12.925 10:40:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:27:12.925 10:40:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:27:12.925 10:40:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 1675006 ']' 00:27:12.925 10:40:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 1675006 00:27:12.925 10:40:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1675006 ']' 00:27:12.925 10:40:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 1675006 00:27:12.925 10:40:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:27:12.925 10:40:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:27:12.925 10:40:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1675006 00:27:13.184 10:40:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:27:13.184 10:40:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:27:13.184 10:40:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1675006' 00:27:13.184 killing process with pid 1675006 00:27:13.184 10:40:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 1675006 00:27:13.184 10:40:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 1675006 00:27:13.184 10:40:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:13.184 10:40:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:13.184 10:40:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:13.184 10:40:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:27:13.184 10:40:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:27:13.184 10:40:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:13.184 10:40:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:27:13.184 10:40:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:13.184 10:40:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:13.184 10:40:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:13.184 10:40:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:13.184 10:40:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:15.719 10:40:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:15.719 00:27:15.719 real 0m19.714s 00:27:15.719 user 0m47.842s 00:27:15.719 sys 0m9.612s 00:27:15.719 10:40:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:15.719 10:40:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:15.719 ************************************ 00:27:15.719 END TEST nvmf_target_disconnect 00:27:15.719 ************************************ 00:27:15.719 10:40:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:27:15.719 00:27:15.719 real 5m50.685s 00:27:15.719 user 10m32.109s 00:27:15.719 sys 1m57.554s 00:27:15.719 10:40:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:15.719 10:40:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.719 ************************************ 00:27:15.719 END TEST nvmf_host 00:27:15.719 ************************************ 00:27:15.719 10:40:49 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:27:15.719 10:40:49 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:27:15.719 10:40:49 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:15.719 10:40:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:15.719 10:40:49 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:15.719 10:40:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:15.719 ************************************ 00:27:15.719 START TEST nvmf_target_core_interrupt_mode 00:27:15.719 ************************************ 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:15.719 * Looking for test storage... 00:27:15.719 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:15.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.719 --rc genhtml_branch_coverage=1 00:27:15.719 --rc genhtml_function_coverage=1 00:27:15.719 --rc genhtml_legend=1 00:27:15.719 --rc geninfo_all_blocks=1 00:27:15.719 --rc geninfo_unexecuted_blocks=1 00:27:15.719 00:27:15.719 ' 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:15.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.719 --rc genhtml_branch_coverage=1 00:27:15.719 --rc genhtml_function_coverage=1 00:27:15.719 --rc genhtml_legend=1 00:27:15.719 --rc geninfo_all_blocks=1 00:27:15.719 --rc geninfo_unexecuted_blocks=1 00:27:15.719 00:27:15.719 ' 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:15.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.719 --rc genhtml_branch_coverage=1 00:27:15.719 --rc genhtml_function_coverage=1 00:27:15.719 --rc genhtml_legend=1 00:27:15.719 --rc geninfo_all_blocks=1 00:27:15.719 --rc geninfo_unexecuted_blocks=1 00:27:15.719 00:27:15.719 ' 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:15.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.719 --rc genhtml_branch_coverage=1 00:27:15.719 --rc genhtml_function_coverage=1 00:27:15.719 --rc genhtml_legend=1 00:27:15.719 --rc geninfo_all_blocks=1 00:27:15.719 --rc geninfo_unexecuted_blocks=1 00:27:15.719 00:27:15.719 ' 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:15.719 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:15.720 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.720 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.720 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.720 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:27:15.720 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.720 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:27:15.720 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:15.720 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:15.720 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:15.720 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:15.720 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:15.720 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:15.720 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:15.720 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:15.720 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:15.720 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:15.720 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:27:15.720 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:27:15.720 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:27:15.720 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:15.720 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:15.720 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:15.720 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:15.720 ************************************ 00:27:15.720 START TEST nvmf_abort 00:27:15.720 ************************************ 00:27:15.720 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:15.720 * Looking for test storage... 00:27:15.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:15.720 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:15.720 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:27:15.720 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:15.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.980 --rc genhtml_branch_coverage=1 00:27:15.980 --rc genhtml_function_coverage=1 00:27:15.980 --rc genhtml_legend=1 00:27:15.980 --rc geninfo_all_blocks=1 00:27:15.980 --rc geninfo_unexecuted_blocks=1 00:27:15.980 00:27:15.980 ' 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:15.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.980 --rc genhtml_branch_coverage=1 00:27:15.980 --rc genhtml_function_coverage=1 00:27:15.980 --rc genhtml_legend=1 00:27:15.980 --rc geninfo_all_blocks=1 00:27:15.980 --rc geninfo_unexecuted_blocks=1 00:27:15.980 00:27:15.980 ' 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:15.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.980 --rc genhtml_branch_coverage=1 00:27:15.980 --rc genhtml_function_coverage=1 00:27:15.980 --rc genhtml_legend=1 00:27:15.980 --rc geninfo_all_blocks=1 00:27:15.980 --rc geninfo_unexecuted_blocks=1 00:27:15.980 00:27:15.980 ' 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:15.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.980 --rc genhtml_branch_coverage=1 00:27:15.980 --rc genhtml_function_coverage=1 00:27:15.980 --rc genhtml_legend=1 00:27:15.980 --rc geninfo_all_blocks=1 00:27:15.980 --rc geninfo_unexecuted_blocks=1 00:27:15.980 00:27:15.980 ' 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.980 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.981 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.981 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:27:15.981 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.981 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:27:15.981 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:15.981 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:15.981 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:15.981 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:15.981 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:15.981 10:40:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:15.981 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:15.981 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:15.981 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:15.981 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:15.981 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:15.981 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:27:15.981 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:27:15.981 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:15.981 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:15.981 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:15.981 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:15.981 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:15.981 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:15.981 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:15.981 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:15.981 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:15.981 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:15.981 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:27:15.981 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:22.551 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:22.552 10:40:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:22.552 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
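The device scan here walks pci_devs and matches Intel device ID 0x159b (E810) before mapping each function to its kernel net device; the same iteration continues below for the second port. Stripped of common.sh's bookkeeping, the sysfs walk amounts to this sketch (paths as on this test machine):

    # Enumerate E810 ports (vendor 0x8086, device 0x159b) and the net
    # devices bound to them, mirroring the "Found ..." lines in the log.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(cat "$pci/vendor"); device=$(cat "$pci/device")
        if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
            for net in "$pci"/net/*; do
                echo "Found net devices under ${pci##*/}: ${net##*/}"
            done
        fi
    done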
00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:22.552 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:22.552 Found net devices under 0000:af:00.0: cvl_0_0 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:22.552 Found net devices under 0000:af:00.1: cvl_0_1 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:22.552 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:22.552 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:27:22.552 00:27:22.552 --- 10.0.0.2 ping statistics --- 00:27:22.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:22.552 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:27:22.552 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:22.552 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:22.552 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:27:22.552 00:27:22.553 --- 10.0.0.1 ping statistics --- 00:27:22.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:22.553 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:27:22.553 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:22.553 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:27:22.553 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:22.553 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:22.553 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:22.553 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:22.553 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:22.553 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:22.553 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:22.553 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:27:22.553 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:22.553 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:22.553 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:22.553 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=1679669 00:27:22.553 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1679669 00:27:22.553 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:22.553 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1679669 ']' 00:27:22.553 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:22.553 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:22.553 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:22.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:22.553 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:22.553 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:22.553 [2024-12-12 10:40:55.738294] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:22.553 [2024-12-12 10:40:55.739167] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:27:22.553 [2024-12-12 10:40:55.739200] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:22.553 [2024-12-12 10:40:55.817456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:22.553 [2024-12-12 10:40:55.857429] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:22.553 [2024-12-12 10:40:55.857465] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:22.553 [2024-12-12 10:40:55.857472] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:22.553 [2024-12-12 10:40:55.857478] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:22.553 [2024-12-12 10:40:55.857483] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:22.553 [2024-12-12 10:40:55.858841] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:27:22.553 [2024-12-12 10:40:55.858869] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:27:22.553 [2024-12-12 10:40:55.858870] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:27:22.553 [2024-12-12 10:40:55.926541] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:22.553 [2024-12-12 10:40:55.927480] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:22.553 [2024-12-12 10:40:55.927856] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
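The namespace plumbing traced above gives the target its own network stack: the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, both directions are verified with a single ping, and nvmf_tgt is then launched inside the namespace. Condensed into plain commands (a sketch; the real common.sh also flushes stale addresses first and tags its iptables rule with an SPDK_NVMF comment, as seen in the trace):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xE &

The core mask -m 0xE pins reactors to cores 1-3, which is why the startup notices above report three reactors and one nvmf_tgt_poll_group thread per core, each switched to interrupt mode.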
00:27:22.553 [2024-12-12 10:40:55.927957] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:22.553 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:22.553 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:27:22.553 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:22.553 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:22.553 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:22.553 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:22.553 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:27:22.553 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.553 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:22.553 [2024-12-12 10:40:55.995756] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:22.553 10:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.553 10:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:27:22.553 10:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.553 10:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:22.553 Malloc0 00:27:22.553 10:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.553 10:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:22.553 10:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.553 10:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:22.553 Delay0 00:27:22.553 10:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.553 10:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:22.553 10:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.553 10:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:22.553 10:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.553 10:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:27:22.553 10:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:22.553 10:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:22.553 10:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.553 10:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:22.553 10:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.553 10:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:22.553 [2024-12-12 10:40:56.087724] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:22.553 10:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.553 10:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:22.553 10:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.553 10:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:22.553 10:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.553 10:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:27:22.553 [2024-12-12 10:40:56.251741] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:27:24.457 Initializing NVMe Controllers 00:27:24.457 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:27:24.457 controller IO queue size 128 less than required 00:27:24.457 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:27:24.457 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:27:24.457 Initialization complete. Launching workers. 
00:27:24.457 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37864 00:27:24.457 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37921, failed to submit 66 00:27:24.457 success 37864, unsuccessful 57, failed 0 00:27:24.457 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:24.457 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.457 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:24.457 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.457 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:27:24.457 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:27:24.457 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:24.457 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:27:24.457 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:24.457 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:27:24.457 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:24.457 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:24.457 rmmod nvme_tcp 00:27:24.457 rmmod nvme_fabrics 00:27:24.457 rmmod nvme_keyring 00:27:24.457 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:24.457 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:27:24.457 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:27:24.457 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1679669 ']' 00:27:24.457 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1679669 00:27:24.457 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1679669 ']' 00:27:24.457 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1679669 00:27:24.457 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:27:24.457 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:24.457 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1679669 00:27:24.716 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:24.716 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:24.716 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1679669' 00:27:24.716 killing process with pid 1679669 
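The abort exercise traced above is self-contained enough to replay by hand: a TCP transport, a malloc bdev wrapped in a delay bdev (1,000,000 us latencies keep reads in flight long enough to be abortable), a subsystem exposing the delay namespace, and the abort example as the initiator. A minimal sketch consolidated from the xtrace, assuming the same workspace path and a target already serving RPCs on the default /var/tmp/spdk.sock:

  #!/usr/bin/env bash
  # Replays the nvmf_abort setup traced above; SPDK_DIR is this job's checkout.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc=$SPDK_DIR/scripts/rpc.py

  $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
  $rpc bdev_malloc_create 64 4096 -b Malloc0              # 64 MB bdev, 4096-byte blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000        # 1 s avg/p99 read+write latency
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # 1 s of queue-depth-128 load on core 0; the example aborts its own queued reads.
  $SPDK_DIR/build/examples/abort \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -c 0x1 -t 1 -l warning -q 128

The summary printed above is the example's own accounting: 37921 aborts submitted, of which 37864 succeeded, 57 were unsuccessful, and 66 could not be submitted.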
00:27:24.716 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1679669 00:27:24.716 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1679669 00:27:24.716 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:24.716 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:24.716 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:24.716 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:27:24.716 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:27:24.716 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:24.716 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:27:24.716 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:24.716 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:24.716 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:24.716 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:24.716 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.252 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:27.252 00:27:27.252 real 0m11.174s 00:27:27.252 user 0m10.639s 00:27:27.252 sys 0m5.768s 00:27:27.252 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:27.252 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:27.252 ************************************ 00:27:27.252 END TEST nvmf_abort 00:27:27.252 ************************************ 00:27:27.252 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:27.252 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:27.252 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:27.252 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:27.252 ************************************ 00:27:27.252 START TEST nvmf_ns_hotplug_stress 00:27:27.252 ************************************ 00:27:27.252 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:27.252 * Looking for test storage... 
00:27:27.252 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:27.252 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:27.252 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:27:27.252 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:27.252 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:27.252 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:27.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:27.253 --rc genhtml_branch_coverage=1 00:27:27.253 --rc genhtml_function_coverage=1 00:27:27.253 --rc genhtml_legend=1 00:27:27.253 --rc geninfo_all_blocks=1 00:27:27.253 --rc geninfo_unexecuted_blocks=1 00:27:27.253 00:27:27.253 ' 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:27.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:27.253 --rc genhtml_branch_coverage=1 00:27:27.253 --rc genhtml_function_coverage=1 00:27:27.253 --rc genhtml_legend=1 00:27:27.253 --rc geninfo_all_blocks=1 00:27:27.253 --rc geninfo_unexecuted_blocks=1 00:27:27.253 00:27:27.253 ' 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:27.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:27.253 --rc genhtml_branch_coverage=1 00:27:27.253 --rc genhtml_function_coverage=1 00:27:27.253 --rc genhtml_legend=1 00:27:27.253 --rc geninfo_all_blocks=1 00:27:27.253 --rc geninfo_unexecuted_blocks=1 00:27:27.253 00:27:27.253 ' 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:27.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:27.253 --rc genhtml_branch_coverage=1 00:27:27.253 --rc genhtml_function_coverage=1 
00:27:27.253 --rc genhtml_legend=1 00:27:27.253 --rc geninfo_all_blocks=1 00:27:27.253 --rc geninfo_unexecuted_blocks=1 00:27:27.253 00:27:27.253 ' 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:27.253 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:27.254 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:27.254 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:27.254 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:27.254 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:27.254 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:27:27.254 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:27.254 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:27.254 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:27.254 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:27.254 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:27.254 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:27.254 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:27.254 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.254 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:27.254 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:27.254 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:27:27.254 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:33.821 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:33.821 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:27:33.821 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:33.821 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:33.821 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:33.821 10:41:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:33.821 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:33.821 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:27:33.821 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:33.821 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:27:33.821 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:27:33.821 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:27:33.821 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:27:33.821 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:27:33.821 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:27:33.821 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:33.821 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:33.821 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:33.821 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:33.821 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:33.821 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:33.821 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:33.821 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:33.821 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:33.821 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:33.821 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:33.821 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:33.821 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:33.821 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:33.821 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:33.821 10:41:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:33.821 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:33.821 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:33.821 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:33.821 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:33.821 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:33.821 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:33.821 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:33.821 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:33.821 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:33.821 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:33.821 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:33.821 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:33.821 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:33.821 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:33.821 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:33.821 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:33.822 
10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:33.822 Found net devices under 0000:af:00.0: cvl_0_0 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:33.822 Found net devices under 0000:af:00.1: cvl_0_1 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:33.822 10:41:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:33.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:33.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.371 ms 00:27:33.822 00:27:33.822 --- 10.0.0.2 ping statistics --- 00:27:33.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.822 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:33.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:33.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:27:33.822 00:27:33.822 --- 10.0.0.1 ping statistics --- 00:27:33.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.822 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1683596 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1683596 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1683596 ']' 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:33.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
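Condensed, the nvmftestinit sequence traced above builds a back-to-back test bed on the two e810 ports found earlier: the target-side port moves into a private network namespace, both ends get a 10.0.0.0/24 address, a firewall exception is tagged so teardown can find it, and connectivity is proven in both directions. A sketch of just those steps (interface and namespace names are the ones from this run; run as root):

  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add $NS
  ip link set cvl_0_0 netns $NS                      # target port lives inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root namespace
  ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec $NS ip link set cvl_0_0 up
  ip netns exec $NS ip link set lo up
  # The SPDK_NVMF comment is the teardown hook: nvmftestfini restores the ruleset with
  # iptables-save | grep -v SPDK_NVMF | iptables-restore (see the abort teardown above).
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                 # root ns -> target ns
  ip netns exec $NS ping -c 1 10.0.0.1               # target ns -> root ns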
00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:33.822 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:33.822 [2024-12-12 10:41:07.026700] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:33.822 [2024-12-12 10:41:07.027661] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:27:33.822 [2024-12-12 10:41:07.027700] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:33.822 [2024-12-12 10:41:07.104619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:33.822 [2024-12-12 10:41:07.143379] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:33.822 [2024-12-12 10:41:07.143414] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:33.822 [2024-12-12 10:41:07.143421] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:33.822 [2024-12-12 10:41:07.143428] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:33.822 [2024-12-12 10:41:07.143433] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:33.822 [2024-12-12 10:41:07.144767] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:27:33.822 [2024-12-12 10:41:07.144873] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:27:33.822 [2024-12-12 10:41:07.144881] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:27:33.822 [2024-12-12 10:41:07.212280] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:33.822 [2024-12-12 10:41:07.213179] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:33.823 [2024-12-12 10:41:07.213347] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:33.823 [2024-12-12 10:41:07.213501] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
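The run of NOTICE lines above is the interrupt-mode handshake: three reactors start (the target was launched with --interrupt-mode and -m 0xE, i.e. cores 1-3), then app_thread and each nvmf_tgt_poll_group thread is flipped to interrupt mode. If you prefer querying a live target over log-scraping, reactor state is exposed over the app's RPC socket; a usage sketch, assuming the framework_get_reactors RPC is present in SPDK builds of this vintage:

  # Ask the target (listening on the default /var/tmp/spdk.sock inside the namespace)
  # for its reactor list; recent builds report a per-reactor interrupt flag here.
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_reactors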
00:27:33.823 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:33.823 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:27:33.823 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:33.823 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:33.823 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:33.823 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:33.823 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:27:33.823 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:33.823 [2024-12-12 10:41:07.462020] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:33.823 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:33.823 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:34.081 [2024-12-12 10:41:07.878264] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:34.082 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:34.082 10:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:27:34.340 Malloc0 00:27:34.340 10:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:34.598 Delay0 00:27:34.598 10:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:34.857 10:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:27:34.857 NULL1 00:27:35.115 10:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
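Setup for ns_hotplug_stress mirrors the abort test with two differences visible in the trace: the subsystem caps namespaces at ten (-m 10), and a 1000 MB null bdev (NULL1) joins Delay0 so it can be resized on every pass. The perf initiator and the loop body appear in the trace that follows; stitched together, the whole test is roughly this (a sketch reusing the script's own names, not a verbatim copy of ns_hotplug_stress.sh):

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc=$SPDK_DIR/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem $nqn -a -s SPDK00000000000001 -m 10   # at most 10 namespaces
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 512 -b Malloc0
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns $nqn Delay0
  $rpc bdev_null_create NULL1 1000 512        # 1000 MB, 512-byte blocks; grown below
  $rpc nvmf_subsystem_add_ns $nqn NULL1

  # 30 s of random 512-byte reads at queue depth 128 while namespaces churn underneath.
  $SPDK_DIR/build/bin/spdk_nvme_perf -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!

  null_size=1000
  while kill -0 $PERF_PID 2>/dev/null; do
      $rpc nvmf_subsystem_remove_ns $nqn 1    # hot-remove namespace 1 (Delay0) under load
      $rpc nvmf_subsystem_add_ns $nqn Delay0  # hot-add it back
      null_size=$((null_size + 1))
      $rpc bdev_null_resize NULL1 $null_size  # grow NULL1 by 1 MB per pass
  done

The bursts of 'Read completed with error (sct=0, sc=11)' further down are the point of the test: reads issued while namespace 1 is detached are expected to fail, and perf's -Q 1000 threshold appears to be why each burst surfaces as a single 'Message suppressed 999 times' line rather than a flood.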
00:27:35.115 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1684040 00:27:35.115 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:27:35.115 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1684040 00:27:35.115 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:36.492 Read completed with error (sct=0, sc=11) 00:27:36.492 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:36.492 10:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:36.492 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:36.492 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:36.492 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:36.492 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:36.492 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:36.492 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:36.751 10:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:27:36.751 10:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:27:36.751 true 00:27:36.751 10:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1684040 00:27:36.751 10:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:37.688 10:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:37.998 10:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:27:37.998 10:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:27:37.998 true 00:27:37.998 10:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1684040 00:27:37.998 10:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:38.275 10:41:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:38.534 10:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:27:38.534 10:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:27:38.534 true 00:27:38.534 10:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1684040 00:27:38.534 10:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:39.911 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:39.911 10:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:39.911 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:39.911 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:39.911 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:39.911 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:39.912 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:39.912 10:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:27:39.912 10:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:27:40.170 true 00:27:40.170 10:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1684040 00:27:40.170 10:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:41.106 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:41.106 10:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:41.106 10:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:27:41.106 10:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:27:41.365 true 00:27:41.365 10:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1684040 00:27:41.365 10:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:41.365 10:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:41.623 10:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:27:41.623 10:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:27:41.882 true 00:27:41.882 10:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1684040 00:27:41.882 10:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:43.258 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:43.258 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:43.259 10:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:43.259 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:43.259 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:43.259 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:43.259 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:43.259 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:43.259 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:27:43.259 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:27:43.517 true 00:27:43.517 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1684040 00:27:43.517 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:44.453 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:44.453 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:27:44.453 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:27:44.712 true 00:27:44.712 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1684040 00:27:44.712 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:44.970 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:44.970 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:27:44.970 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:27:45.229 true 00:27:45.229 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1684040 00:27:45.229 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:46.605 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:46.605 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:46.605 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:46.605 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:46.605 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:46.605 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:46.605 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:46.605 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:46.605 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:27:46.605 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:27:46.863 true 00:27:46.863 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1684040 00:27:46.863 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:47.800 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:47.800 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:27:47.800 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:27:48.059 true 00:27:48.059 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1684040 00:27:48.059 10:41:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:48.318 10:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:48.318 10:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:27:48.318 10:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:27:48.576 true 00:27:48.576 10:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1684040 00:27:48.576 10:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:49.512 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:49.512 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:49.512 10:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:49.771 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:49.771 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:49.771 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:49.771 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:49.771 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:49.771 10:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:27:49.771 10:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:27:50.029 true 00:27:50.029 10:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1684040 00:27:50.029 10:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:50.965 10:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:50.965 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:50.965 10:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:27:50.965 10:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:27:51.223 true 00:27:51.223 10:41:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1684040 00:27:51.223 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:51.481 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:51.739 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:27:51.739 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:27:51.739 true 00:27:51.739 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1684040 00:27:51.739 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:53.115 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:53.115 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:53.115 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:53.115 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:53.115 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:53.115 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:53.115 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:53.115 10:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:27:53.115 10:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:27:53.375 true 00:27:53.375 10:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1684040 00:27:53.375 10:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:54.310 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:54.311 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:27:54.311 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:27:54.569 true 00:27:54.569 10:41:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1684040 00:27:54.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:54.827 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:55.086 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:27:55.086 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:27:55.086 true 00:27:55.086 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1684040 00:27:55.086 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:56.463 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:56.463 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:27:56.463 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:27:56.463 true 00:27:56.722 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1684040 00:27:56.722 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:56.722 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:56.980 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:27:56.980 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:27:57.239 true 00:27:57.239 10:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1684040 00:27:57.239 10:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:58.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:58.175 10:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:58.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:58.434 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:58.434 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:58.434 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:58.434 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:58.434 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:58.434 10:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:27:58.434 10:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:27:58.692 true 00:27:58.692 10:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1684040 00:27:58.692 10:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:59.628 10:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:59.628 10:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:27:59.628 10:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:27:59.887 true 00:27:59.887 10:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1684040 00:27:59.887 10:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:00.146 10:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:00.404 10:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:28:00.404 10:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:28:00.404 true 00:28:00.404 10:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1684040 00:28:00.404 10:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:01.779 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:01.779 10:41:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:01.779 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:01.779 10:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:28:01.779 10:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:28:02.038 true 00:28:02.038 10:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1684040 00:28:02.038 10:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:02.038 10:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:02.296 10:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:28:02.296 10:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:28:02.555 true 00:28:02.555 10:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1684040 00:28:02.555 10:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:03.931 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:03.931 10:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:03.931 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:03.931 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:03.931 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:03.931 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:03.931 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:03.931 10:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:28:03.931 10:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:28:03.931 true 00:28:04.189 10:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1684040 00:28:04.189 10:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:04.757 10:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:05.015 10:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:28:05.015 10:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:28:05.274 true
00:28:05.274 10:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1684040
00:28:05.274 10:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:05.533 10:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:05.533 Initializing NVMe Controllers
00:28:05.533 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:05.533 Controller IO queue size 128, less than required.
00:28:05.533 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:05.533 Controller IO queue size 128, less than required.
00:28:05.533 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:05.533 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:05.533 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:28:05.533 Initialization complete. Launching workers.
00:28:05.533 ========================================================
00:28:05.533                                                                                             Latency(us)
00:28:05.533 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:28:05.533 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2005.61       0.98   43858.19    2598.41 1013245.96
00:28:05.533 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   18367.12       8.97    6968.47    1516.48  443720.14
00:28:05.533 ========================================================
00:28:05.533 Total                                                                  :   20372.74       9.95   10600.11    1516.48 1013245.96
00:28:05.533
00:28:05.533 10:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:28:05.533 10:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:28:05.792 true
00:28:05.792 10:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1684040
00:28:05.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1684040) - No such process
00:28:05.792 10:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1684040
00:28:05.792 10:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:06.050 10:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:06.308 10:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:28:06.308 10:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:28:06.308 10:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:28:06.308 10:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:06.308 10:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:28:06.308 null0
00:28:06.566 10:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:28:06.566 10:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:06.566 10:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:28:06.566 null1
00:28:06.566 10:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:28:06.566 10:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:06.566 10:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:28:06.825 null2 00:28:06.825 10:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:06.825 10:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:06.825 10:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:28:07.083 null3 00:28:07.083 10:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:07.083 10:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:07.083 10:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:28:07.083 null4 00:28:07.342 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:07.342 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:07.342 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:28:07.342 null5 00:28:07.342 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:07.342 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:07.342 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:28:07.600 null6 00:28:07.600 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:07.600 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:07.600 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:28:07.859 null7 00:28:07.859 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
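The performance summary printed above at 00:28:05.533 is the exit report of the I/O generator whose pid (1684040) the loop had been polling; once it exits, kill -0 fails with "No such process" and the script moves on to the eight-worker phase traced from here on. The numbers are the point of the test: NSID 1 is the Delay0 namespace that was hot-removed and re-attached every cycle, so it completed far fewer reads (2005.61 IOPS vs 18367.12) at much higher latency (average 43858.19 us, max around 1.01 s) than the undisturbed NSID 2, and the earlier floods of "Read completed with error (sct=0, sc=11)" are those same reads failing while the namespace was detached (sc=11 decimal reads most naturally as NVMe generic status 0x0b, Invalid Namespace or Format; that decoding is an inference, not something the log states). The Total row is just the per-namespace sum, which a quick check confirms:

awk 'BEGIN {
    printf "IOPS  : %.2f\n", 2005.61 + 18367.12   # 20372.73, i.e. the reported 20372.74 up to rounding
    printf "MiB/s : %.2f\n", 0.98 + 8.97          # 9.95, matching the Total row exactly
}'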
00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
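For anyone reconstructing what ran here: the ns_hotplug_stress.sh@NN markers in the xtrace output are enough to sketch the single-namespace loop that produced the first half of this trace. The sketch below is inferred from those markers, not copied from the script, and the io_pid name is invented for illustration:

#!/usr/bin/env bash
# Minimal sketch of the serial hotplug loop, per markers @44-@50 in the trace above.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
null_size=1002
io_pid=1684040    # background I/O generator; variable name assumed

while kill -0 "$io_pid" 2> /dev/null; do            # @44: run while the I/O job lives
	"$rpc_py" nvmf_subsystem_remove_ns "$nqn" 1     # @45: hot-remove NSID 1
	"$rpc_py" nvmf_subsystem_add_ns "$nqn" Delay0   # @46: re-attach the Delay0 bdev
	((++null_size))                                 # @49: 1003, 1004, ... as logged
	"$rpc_py" bdev_null_resize NULL1 "$null_size"   # @50: resize NULL1 under load
done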
00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
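The eight-worker phase interleaving through this stretch can be reconstructed the same way: markers @14-@18 describe an add_remove helper, and @58-@66 a launcher that backs each namespace with a null bdev and runs one helper per namespace in the background, which is why the (( ++i )) counters and add/remove calls above and below appear shuffled together. A sketch under those assumptions, reusing rpc_py and nqn from the sketch above:

add_remove() {                                           # markers @14-@18
	local nsid=$1 bdev=$2
	for ((i = 0; i < 10; i++)); do
		"$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
		"$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"
	done
}

nthreads=8                                               # @58
pids=()
for ((i = 0; i < nthreads; i++)); do                     # @59-@60: create null0..null7,
	"$rpc_py" bdev_null_create "null$i" 100 4096         # 100 MB each, 4096-byte blocks
done
for ((i = 0; i < nthreads; i++)); do                     # @62-@64: one worker per namespace
	add_remove $((i + 1)) "null$i" &
	pids+=($!)
done
wait "${pids[@]}"                                        # @66: reap all eight workers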
00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
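Because all eight workers share one console, following a single namespace through this section is easiest with a filter on its nsid. A hypothetical example against a saved copy of this log (console.log is an assumed file name, and the pattern assumes the one-entry-per-line form Jenkins stores):

# Trace only NSID 3, i.e. the worker running add_remove 3 null2:
grep -E 'nvmf_subsystem_add_ns -n 3 nqn|nvmf_subsystem_remove_ns nqn\.2016-06\.io\.spdk:cnode1 3$' console.log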
00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1689252 1689253 1689255 1689257 1689259 1689261 1689262 1689264 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.860 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:08.119 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:08.119 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:08.119 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:08.119 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:08.119 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:08.119 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:08.119 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:08.119 10:41:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:08.119 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.119 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.119 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:08.119 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.119 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.119 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:08.119 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.119 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.119 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:08.119 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.119 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.119 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:08.119 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.119 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.119 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:08.378 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.378 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.378 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:08.378 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.378 10:41:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.378 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:08.378 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.378 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.378 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:08.378 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:08.378 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:08.378 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:08.378 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:08.378 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:08.378 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:08.378 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:08.378 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:08.637 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.637 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.637 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:08.637 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.637 10:41:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.637 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:08.637 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.637 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.637 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:08.637 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.637 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.637 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.637 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.637 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:08.637 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:08.637 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.637 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.637 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:08.637 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.637 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.637 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:08.637 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.637 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.637 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:08.897 10:41:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:08.897 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:08.897 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:08.897 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:08.897 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:08.897 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:08.897 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:08.897 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:09.156 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.156 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.156 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:09.156 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.157 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.157 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.157 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:09.157 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.157 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:09.157 10:41:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.157 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.157 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:09.157 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.157 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.157 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:09.157 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.157 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.157 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.157 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.157 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:09.157 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:09.157 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.157 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.157 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:09.157 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:09.157 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:09.416 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:09.416 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:28:09.416 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:09.416 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:09.416 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:09.416 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:09.416 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.416 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.416 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:09.416 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.416 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.416 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:09.416 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.416 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.416 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:09.416 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.416 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.416 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.416 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.416 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:09.416 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:09.416 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.416 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.416 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:09.416 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.416 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.416 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:09.417 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.417 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.417 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:09.676 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:09.676 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:09.676 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:09.676 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:09.676 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:09.676 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:09.676 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:09.676 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:09.935 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.935 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.935 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:09.935 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.935 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.935 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:09.935 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.935 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.935 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:09.935 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.935 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.935 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:09.935 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.935 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.935 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:09.935 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.935 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.935 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.935 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.935 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:09.935 10:41:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:09.935 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.935 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.935 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:10.194 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:10.195 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:10.195 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:10.195 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:10.195 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:10.195 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:10.195 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:10.195 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:10.195 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.195 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.195 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:10.195 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.195 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.195 10:41:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:10.454 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.454 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.454 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.454 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:10.454 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.454 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:10.454 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.454 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.454 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:10.454 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.454 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.454 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:10.454 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.454 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.454 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:10.454 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.454 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.454 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:10.454 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:10.454 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:10.454 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:10.454 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:10.454 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:10.454 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:10.454 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:10.454 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:10.712 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.712 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.712 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:10.712 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.712 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.712 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:10.712 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.712 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.712 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:10.712 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.712 10:41:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.712 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:10.712 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.713 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.713 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:10.713 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.713 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.713 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:10.713 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.713 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.713 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.713 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:10.713 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.713 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:10.971 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:10.971 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:10.971 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:10.971 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:10.971 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:10.971 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:10.971 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:10.971 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:11.231 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.231 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.231 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:11.231 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.231 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.231 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:11.231 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.232 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.232 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:11.232 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.232 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.232 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:11.232 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.232 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.232 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:11.232 10:41:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.232 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.232 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:11.232 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.232 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.232 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:11.232 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.232 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.232 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:11.232 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:11.232 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:11.232 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:11.232 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:11.232 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:11.232 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:11.492 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:11.492 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:11.492 10:41:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.492 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.492 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:11.492 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.492 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.492 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:11.492 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.492 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.492 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:11.492 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.492 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.492 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:11.492 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.492 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.492 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:11.492 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.492 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.492 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:11.492 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.492 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.492 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:11.492 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.492 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.492 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:11.751 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:11.751 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:11.751 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:11.751 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:11.751 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:11.751 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:11.751 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:11.751 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:12.011 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.011 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.011 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.011 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.011 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.011 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.011 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
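The interleaved add/remove trace above is the body of the namespace hotplug stress loop. Reading the ns_hotplug_stress.sh@16-18 markers: line 16 drives the counter ((( ++i )) / (( i < 10 )), so ten passes), line 17 re-adds namespaces 1 through 8 to nqn.2016-06.io.spdk:cnode1 (nsid N is backed by null bdev null(N-1), i.e. null0 through null7), and line 18 removes them all again. A minimal bash sketch consistent with those markers; the shuffled command order in the log suggests the script issues the RPCs asynchronously, so this reconstruction is an assumption, not the verbatim script:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subsys=nqn.2016-06.io.spdk:cnode1
    for (( i = 0; i < 10; ++i )); do                                  # sh@16
        for n in {1..8}; do
            # nsid n is backed by the null bdev "null$((n - 1))"
            $rpc nvmf_subsystem_add_ns -n $n $subsys null$((n - 1)) & # sh@17
        done
        wait
        for n in {1..8}; do
            $rpc nvmf_subsystem_remove_ns $subsys $n &                # sh@18
        done
        wait
    done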
00:28:12.011 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:12.011 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:12.011 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:12.011 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:12.011 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:12.011 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:12.011 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:12.011 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:12.011 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:12.011 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:28:12.011 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:28:12.011 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:12.011 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:28:12.011 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:12.011 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:28:12.011 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:12.011 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:12.011 rmmod nvme_tcp
00:28:12.011 rmmod nvme_fabrics
00:28:12.011 rmmod nvme_keyring
00:28:12.011 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:12.011 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:28:12.011 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
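That is the end of nvmfcleanup: per the nvmf/common.sh@121-129 trace, it syncs, confirms the transport is tcp, and then retries unloading the kernel initiator modules with failures tolerated (the rmmod lines are modprobe -v output, nvme_keyring being removed as a dependency). A rough sketch of just this traced branch; the TEST_TRANSPORT variable name and the break are assumptions, not the verbatim helper:

    nvmfcleanup() {
        sync
        if [[ $TEST_TRANSPORT == tcp ]]; then    # traced as '[' tcp == tcp ']'
            set +e                               # the modules may already be gone
            for i in {1..20}; do
                modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
            done
            set -e
        fi
    }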
00:28:12.011 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1683596 ']'
00:28:12.011 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1683596
00:28:12.011 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1683596 ']'
00:28:12.011 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1683596
00:28:12.011 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname
00:28:12.011 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:12.011 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1683596
00:28:12.011 10:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:12.011 10:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:12.011 10:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1683596'
00:28:12.011 killing process with pid 1683596
00:28:12.011 10:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1683596
00:28:12.011 10:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1683596
00:28:12.271 10:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:28:12.271 10:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:28:12.271 10:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:28:12.271 10:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:28:12.271 10:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:28:12.271 10:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:28:12.271 10:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:28:12.271 10:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:28:12.271 10:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:28:12.271 10:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:12.271 10:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:12.271 10:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:28:14.808 
00:28:14.808 real 0m47.394s
00:28:14.808 user 2m56.817s
00:28:14.808 sys 0m19.320s
00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:28:14.808 ************************************
00:28:14.808 END TEST nvmf_ns_hotplug_stress
00:28:14.808 ************************************
00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
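The banner and the real/user/sys block above are produced by the run_test wrapper from common/autotest_common.sh, which is immediately re-entered for the next suite (run_test nvmf_delete_subsystem ...). Inferred from the @1105-1130 markers and the output visible in this log, and only as a hedged sketch rather than the verbatim helper, it checks its argument count, brackets the test script in START/END banners, and times it:

    run_test() {
        [ $# -le 1 ] && return 1    # traced above as '[' 4 -le 1 ']'
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                   # here: delete_subsystem.sh --transport=tcp --interrupt-mode
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }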
00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:28:14.808 ************************************
00:28:14.808 START TEST nvmf_delete_subsystem
00:28:14.808 ************************************
00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:28:14.808 * Looking for test storage...
00:28:14.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version
00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:14.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:14.808 --rc genhtml_branch_coverage=1 00:28:14.808 --rc genhtml_function_coverage=1 00:28:14.808 --rc genhtml_legend=1 00:28:14.808 --rc geninfo_all_blocks=1 00:28:14.808 --rc geninfo_unexecuted_blocks=1 00:28:14.808 00:28:14.808 ' 00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:14.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:14.808 --rc genhtml_branch_coverage=1 00:28:14.808 --rc genhtml_function_coverage=1 00:28:14.808 --rc genhtml_legend=1 00:28:14.808 --rc geninfo_all_blocks=1 00:28:14.808 --rc geninfo_unexecuted_blocks=1 00:28:14.808 00:28:14.808 ' 00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:14.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:14.808 --rc genhtml_branch_coverage=1 00:28:14.808 --rc genhtml_function_coverage=1 00:28:14.808 --rc genhtml_legend=1 00:28:14.808 --rc geninfo_all_blocks=1 00:28:14.808 --rc geninfo_unexecuted_blocks=1 00:28:14.808 00:28:14.808 ' 00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:14.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:14.808 --rc genhtml_branch_coverage=1 00:28:14.808 --rc genhtml_function_coverage=1 00:28:14.808 --rc 
genhtml_legend=1 00:28:14.808 --rc geninfo_all_blocks=1 00:28:14.808 --rc geninfo_unexecuted_blocks=1 00:28:14.808 00:28:14.808 ' 00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:14.808 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:14.809 10:41:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.809 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.809 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.809 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:28:14.809 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.809 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:28:14.809 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:14.809 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:14.809 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:14.809 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:14.809 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:14.809 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:14.809 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:14.809 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:14.809 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:14.809 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:14.809 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:28:14.809 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:14.809 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:14.809 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:14.809 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:14.809 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:14.809 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:14.809 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:14.809 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:14.809 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:14.809 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:14.809 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:28:14.809 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:20.083 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:20.083 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:28:20.083 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:20.083 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:20.083 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:20.083 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:20.083 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:20.083 10:41:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:28:20.083 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:20.083 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:28:20.083 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:28:20.083 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:28:20.083 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:28:20.083 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:28:20.083 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:28:20.083 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:20.083 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:20.083 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:20.083 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:20.083 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:20.083 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:20.083 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:20.083 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:20.083 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:20.083 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:20.083 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:20.083 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:20.083 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:20.083 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:20.083 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:20.083 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:20.083 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:20.083 10:41:54 
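[Editor's note] The gather_supported_nvmf_pci_devs block above builds ID tables for Intel E810/X722 and Mellanox ConnectX parts, then keeps only the E810 entries because this job selects e810 ([[ e810 == e810 ]]). A minimal sketch of the same sysfs walk, for orientation only — this is not the nvmf/common.sh code:

    # Report Intel E810 functions (vendor 0x8086, device 0x159b/0x1592, the IDs
    # matched in the trace) and the kernel netdevs they expose under net/.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(cat "$pci/vendor")
        device=$(cat "$pci/device")
        if [[ $vendor == 0x8086 && ($device == 0x159b || $device == 0x1592) ]]; then
            echo "Found ${pci##*/} ($vendor - $device)"
            ls "$pci/net"   # e.g. cvl_0_0 / cvl_0_1 on this machine
        fi
    done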
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:20.083 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:20.083 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:20.083 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:20.083 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:20.083 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:20.084 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.084 10:41:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:20.084 Found net devices under 0000:af:00.0: cvl_0_0 00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:20.084 Found net devices under 0000:af:00.1: cvl_0_1 00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:28:20.084 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:28:20.343 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:28:20.343 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:28:20.343 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:28:20.343 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:28:20.343 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:28:20.343 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:28:20.343 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:28:20.343 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:28:20.343 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:28:20.343 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:28:20.343 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:28:20.343 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.328 ms
00:28:20.343
00:28:20.343 --- 10.0.0.2 ping statistics ---
00:28:20.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:20.343 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms
00:28:20.343 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:28:20.343 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:20.343 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms
00:28:20.343
00:28:20.343 --- 10.0.0.1 ping statistics ---
00:28:20.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:20.343 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms
00:28:20.343 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:20.343 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0
00:28:20.343 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:28:20.343 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:20.343 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:28:20.343 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:28:20.343 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:20.343 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:28:20.343 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:28:20.602 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:28:20.602 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:20.602 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:20.602 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:20.602 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1693548
00:28:20.602 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1693548
00:28:20.602 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3
00:28:20.602 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1693548 ']'
00:28:20.602 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:20.602 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:20.602 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:20.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
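[Editor's note] The block above is the whole point-to-point topology for this phy run: one E810 port (cvl_0_0) moves into a fresh network namespace as the target side at 10.0.0.2/24, its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1/24, an iptables rule opens the NVMe/TCP port, and a ping in each direction proves the link before nvmf_tgt is started inside the namespace. Condensed to just the commands (all taken verbatim from the trace; only the comments are added):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
    ping -c 1 10.0.0.2                                      # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target ns -> root ns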
00:28:20.602 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:20.602 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:20.602 [2024-12-12 10:41:54.439123] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:20.602 [2024-12-12 10:41:54.440256] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:28:20.602 [2024-12-12 10:41:54.440291] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:20.602 [2024-12-12 10:41:54.519827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:20.602 [2024-12-12 10:41:54.560456] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:20.602 [2024-12-12 10:41:54.560491] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:20.602 [2024-12-12 10:41:54.560498] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:20.602 [2024-12-12 10:41:54.560504] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:20.602 [2024-12-12 10:41:54.560512] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:20.602 [2024-12-12 10:41:54.561593] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:20.602 [2024-12-12 10:41:54.561594] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.862 [2024-12-12 10:41:54.630098] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:20.862 [2024-12-12 10:41:54.630650] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:20.862 [2024-12-12 10:41:54.630855] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:28:20.862 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:20.862 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:28:20.862 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:20.862 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:20.862 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:20.862 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:20.862 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:20.862 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.862 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:20.862 [2024-12-12 10:41:54.698369] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:20.862 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.862 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:20.862 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.862 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:20.862 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.862 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:20.862 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.862 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:20.862 [2024-12-12 10:41:54.726722] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:20.862 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.862 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:28:20.862 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.862 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:20.862 NULL1 00:28:20.862 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.862 10:41:54 
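[Editor's note] rpc_cmd in these traces is a thin wrapper around scripts/rpc.py aimed at the target's /var/tmp/spdk.sock. Written out as plain invocations (arguments verbatim from the trace above; the comments are interpretation), the provisioning so far is:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192      # TCP transport; -o/-u are TCP tuning flags
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                          # allow any host, serial number, max 10 namespaces
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420                              # listen on the namespaced E810 port
    scripts/rpc.py bdev_null_create NULL1 1000 512              # 1000 MiB null bdev, 512 B blocks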
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:20.862 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.862 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:20.862 Delay0 00:28:20.862 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.862 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:20.862 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.862 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:20.862 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.862 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1693571 00:28:20.862 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:28:20.862 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:20.862 [2024-12-12 10:41:54.844397] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
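[Editor's note] Delay0 wraps NULL1 with roughly one second of artificial latency per I/O (the -r/-t/-w/-n values are average and p99 read/write latencies in microseconds), which guarantees a queue full of in-flight requests at the moment the subsystem is torn down. The first test case therefore has this shape, sketched from the traced script lines rather than quoted from the script:

    build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &   # 5 s of qd-128 I/O on cores 2-3
    perf_pid=$!
    sleep 2                                          # let the delayed I/Os pile up
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    # every queued request then completes with (sct=0, sc=8), the generic
    # "aborted due to SQ deletion" status that floods the log below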
00:28:22.764 10:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:22.764 10:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.764 10:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 starting I/O failed: -6 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Write completed with error (sct=0, sc=8) 00:28:23.023 Write completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 starting I/O failed: -6 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Write completed with error (sct=0, sc=8) 00:28:23.023 Write completed with error (sct=0, sc=8) 00:28:23.023 Write completed with error (sct=0, sc=8) 00:28:23.023 starting I/O failed: -6 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Write completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 starting I/O failed: -6 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Write completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 starting I/O failed: -6 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 starting I/O failed: -6 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 starting I/O failed: -6 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 starting I/O failed: -6 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Write completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 starting I/O failed: -6 00:28:23.023 Write completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 starting I/O failed: -6 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 [2024-12-12 10:41:56.979296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e92c0 is same with the state(6) to be set 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 
Read completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Write completed with error (sct=0, sc=8) 00:28:23.023 Write completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Write completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Write completed with error (sct=0, sc=8) 00:28:23.023 Write completed with error (sct=0, sc=8) 00:28:23.023 Write completed with error (sct=0, sc=8) 00:28:23.023 Write completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Write completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Write completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Write completed with error (sct=0, sc=8) 00:28:23.023 Write completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Write completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.023 Read completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Write completed with error (sct=0, sc=8) 00:28:23.024 Write completed with error (sct=0, sc=8) 00:28:23.024 Write completed with error (sct=0, sc=8) 00:28:23.024 Write completed with error (sct=0, sc=8) 00:28:23.024 Write completed with error (sct=0, sc=8) 00:28:23.024 Write completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Write completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Write completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Write completed with error (sct=0, sc=8) 00:28:23.024 Write completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Write completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Write completed with error 
(sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Write completed with error (sct=0, sc=8) 00:28:23.024 Write completed with error (sct=0, sc=8) 00:28:23.024 Write completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Write completed with error (sct=0, sc=8) 00:28:23.024 Write completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Write completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 starting I/O failed: -6 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 starting I/O failed: -6 00:28:23.024 Write completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 starting I/O failed: -6 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Write completed with error (sct=0, sc=8) 00:28:23.024 starting I/O failed: -6 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Write completed with error (sct=0, sc=8) 00:28:23.024 starting I/O failed: -6 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Write completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 starting I/O failed: -6 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 starting I/O failed: -6 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Write completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Write completed with error (sct=0, sc=8) 00:28:23.024 starting I/O failed: -6 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Write completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Write completed with error (sct=0, sc=8) 00:28:23.024 starting I/O failed: -6 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 starting I/O failed: -6 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Read completed 
with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 starting I/O failed: -6 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 Write completed with error (sct=0, sc=8) 00:28:23.024 Read completed with error (sct=0, sc=8) 00:28:23.024 starting I/O failed: -6 00:28:23.024 starting I/O failed: -6 00:28:23.024 starting I/O failed: -6 00:28:23.024 starting I/O failed: -6 00:28:23.959 [2024-12-12 10:41:57.939982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea9b0 is same with the state(6) to be set 00:28:24.218 Write completed with error (sct=0, sc=8) 00:28:24.218 Write completed with error (sct=0, sc=8) 00:28:24.218 Read completed with error (sct=0, sc=8) 00:28:24.218 Read completed with error (sct=0, sc=8) 00:28:24.218 Read completed with error (sct=0, sc=8) 00:28:24.218 Write completed with error (sct=0, sc=8) 00:28:24.218 Read completed with error (sct=0, sc=8) 00:28:24.218 Read completed with error (sct=0, sc=8) 00:28:24.218 Read completed with error (sct=0, sc=8) 00:28:24.218 Read completed with error (sct=0, sc=8) 00:28:24.218 Write completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Write completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Write completed with error (sct=0, sc=8) 00:28:24.219 Write completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Write completed with error (sct=0, sc=8) 00:28:24.219 Write completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Write completed with error (sct=0, sc=8) 00:28:24.219 Write completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Write completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Write completed with error (sct=0, sc=8) 00:28:24.219 Write completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 [2024-12-12 10:41:57.983243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f93e000d060 is same with the state(6) to be set 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Write completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 
Write completed with error (sct=0, sc=8) 00:28:24.219 Write completed with error (sct=0, sc=8) 00:28:24.219 Write completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Write completed with error (sct=0, sc=8) 00:28:24.219 Write completed with error (sct=0, sc=8) 00:28:24.219 [2024-12-12 10:41:57.983353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9960 is same with the state(6) to be set 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Write completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Write completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Write completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Write completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Write completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Write completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Write completed with error (sct=0, sc=8) 00:28:24.219 Write completed with error (sct=0, sc=8) 00:28:24.219 Write completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Write completed with error (sct=0, sc=8) 00:28:24.219 [2024-12-12 10:41:57.983522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f93e000d800 is same with the state(6) to be set 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Write completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Write completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Write completed with error (sct=0, sc=8) 00:28:24.219 Read completed with error (sct=0, sc=8) 00:28:24.219 Write completed with error (sct=0, sc=8) 00:28:24.219 Write completed with error (sct=0, sc=8) 00:28:24.219 Write 
completed with error (sct=0, sc=8)
00:28:24.219 Read completed with error (sct=0, sc=8)
00:28:24.219 Read completed with error (sct=0, sc=8)
00:28:24.219 Write completed with error (sct=0, sc=8)
00:28:24.219 Read completed with error (sct=0, sc=8)
00:28:24.219 Read completed with error (sct=0, sc=8)
00:28:24.219 Write completed with error (sct=0, sc=8)
00:28:24.219 Read completed with error (sct=0, sc=8)
00:28:24.219 Read completed with error (sct=0, sc=8)
00:28:24.219 Read completed with error (sct=0, sc=8)
00:28:24.219 Read completed with error (sct=0, sc=8)
00:28:24.219 Read completed with error (sct=0, sc=8)
00:28:24.219 Read completed with error (sct=0, sc=8)
00:28:24.219 Read completed with error (sct=0, sc=8)
00:28:24.219 Read completed with error (sct=0, sc=8)
00:28:24.219 Write completed with error (sct=0, sc=8)
00:28:24.219 Read completed with error (sct=0, sc=8)
00:28:24.219 Read completed with error (sct=0, sc=8)
00:28:24.219 Read completed with error (sct=0, sc=8)
00:28:24.219 Write completed with error (sct=0, sc=8)
00:28:24.219 [2024-12-12 10:41:57.984251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f93e0000c80 is same with the state(6) to be set
00:28:24.219 Initializing NVMe Controllers
00:28:24.219 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:24.219 Controller IO queue size 128, less than required.
00:28:24.219 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:24.219 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:28:24.219 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:28:24.219 Initialization complete. Launching workers.
00:28:24.219 ========================================================
00:28:24.219 Latency(us)
00:28:24.219 Device Information : IOPS MiB/s Average min max
00:28:24.219 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 155.44 0.08 877617.33 251.61 1042665.11
00:28:24.219 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 174.31 0.09 1056251.76 356.40 2001412.06
00:28:24.219 ========================================================
00:28:24.219 Total : 329.74 0.16 972046.08 251.61 2001412.06
00:28:24.219
00:28:24.219 [2024-12-12 10:41:57.984876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23ea9b0 (9): Bad file descriptor
00:28:24.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:28:24.219 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:24.219 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:28:24.219 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1693571
00:28:24.219 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:28:24.478 10:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:28:24.478 10:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1693571
00:28:24.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1693571) - No such process
00:28:24.478 10:41:58
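[Editor's note] Lines 34-38 of delete_subsystem.sh, traced above, poll for perf's death instead of calling wait right away: kill -0 delivers no signal and only probes whether the PID still exists, so the "No such process" message is the loop's exit condition, not a failure (the script deliberately lets kill's error print, which is what appears in the log). The pattern, sketched with the same names:

    delay=0
    while kill -0 "$perf_pid"; do       # probe only; no signal is sent
        (( delay++ > 30 )) && return 1  # give up after ~15 s of 0.5 s naps
        sleep 0.5
    done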
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1693571 00:28:24.478 10:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:28:24.478 10:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1693571 00:28:24.478 10:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:28:24.478 10:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:24.478 10:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:28:24.478 10:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:24.478 10:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1693571 00:28:24.478 10:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:28:24.478 10:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:24.478 10:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:24.478 10:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:24.478 10:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:24.478 10:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.478 10:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:24.737 10:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.737 10:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:24.737 10:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.737 10:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:24.737 [2024-12-12 10:41:58.518684] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:24.737 10:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.737 10:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:24.737 10:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.737 10:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:24.737 10:41:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.737 10:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1694209 00:28:24.737 10:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:28:24.737 10:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:24.737 10:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1694209 00:28:24.737 10:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:24.737 [2024-12-12 10:41:58.610415] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:28:25.304 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:25.304 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1694209 00:28:25.304 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:25.563 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:25.563 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1694209 00:28:25.563 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:26.130 10:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:26.130 10:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1694209 00:28:26.130 10:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:26.697 10:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:26.697 10:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1694209 00:28:26.697 10:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:27.264 10:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:27.264 10:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1694209 00:28:27.264 10:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:27.830 10:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:27.830 10:42:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1694209 00:28:27.830 10:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:27.830 Initializing NVMe Controllers 00:28:27.830 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:27.830 Controller IO queue size 128, less than required. 00:28:27.830 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:27.830 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:28:27.830 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:28:27.830 Initialization complete. Launching workers. 00:28:27.830 ======================================================== 00:28:27.830 Latency(us) 00:28:27.830 Device Information : IOPS MiB/s Average min max 00:28:27.830 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002388.61 1000129.70 1041427.18 00:28:27.830 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004148.80 1000310.81 1011062.66 00:28:27.830 ======================================================== 00:28:27.830 Total : 256.00 0.12 1003268.71 1000129.70 1041427.18 00:28:27.830 00:28:28.087 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:28.087 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1694209 00:28:28.087 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1694209) - No such process 00:28:28.087 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1694209 00:28:28.087 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:28:28.087 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:28:28.087 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:28.087 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:28:28.087 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:28.087 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:28:28.087 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:28.087 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:28.087 rmmod nvme_tcp 00:28:28.087 rmmod nvme_fabrics 00:28:28.345 rmmod nvme_keyring 00:28:28.345 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:28.345 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:28:28.345 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:28:28.345 10:42:02 
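[Editor's note] nvmftestfini unwinds the fixture in roughly the reverse order of setup; condensed from the sequence traced above and continued just below (the _remove_spdk_ns body runs with xtrace disabled, so the namespace-deletion command is an assumption here, not something the log shows):

    sync
    modprobe -v -r nvme-tcp            # drops nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 1693548 && wait 1693548       # killprocess: stop the interrupt-mode nvmf_tgt
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only SPDK's tagged rules
    ip netns delete cvl_0_0_ns_spdk    # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1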
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1693548 ']' 00:28:28.345 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1693548 00:28:28.345 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1693548 ']' 00:28:28.345 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1693548 00:28:28.345 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:28:28.345 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:28.345 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1693548 00:28:28.345 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:28.345 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:28.345 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1693548' 00:28:28.345 killing process with pid 1693548 00:28:28.345 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1693548 00:28:28.345 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1693548 00:28:28.345 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:28.345 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:28.345 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:28.345 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:28:28.636 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:28:28.636 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:28.636 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:28:28.636 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:28.636 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:28.636 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:28.636 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:28.636 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:30.604 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:30.604 00:28:30.604 real 0m16.125s 00:28:30.604 
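killprocess, traced just above for target PID 1693548, resolves the command name before signalling (so a sudo wrapper would not be killed directly) and waits on the PID so the exit status is actually collected. Condensed from the traced lines; the sudo branch is not taken in this run, so its body is omitted:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                            # @954: refuse an empty pid
        kill -0 "$pid" || return 0                           # @958: nothing left to kill (assumed branch)
        local process_name=
        [ "$(uname)" = Linux ] &&                            # @959
            process_name=$(ps --no-headers -o comm= "$pid")  # @960: reactor_0 here
        # @964: '[ "$process_name" = sudo ]' would divert to kill the wrapped
        # child instead; that branch is not exercised in this run.
        echo "killing process with pid $pid"                 # @972
        kill "$pid"                                          # @973: default SIGTERM
        wait "$pid"                                          # @978: reap before returning
    }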
user 0m26.221s 00:28:30.604 sys 0m5.961s 00:28:30.604 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:30.604 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:30.604 ************************************ 00:28:30.604 END TEST nvmf_delete_subsystem 00:28:30.604 ************************************ 00:28:30.604 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:30.604 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:30.604 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:30.604 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:30.604 ************************************ 00:28:30.604 START TEST nvmf_host_management 00:28:30.604 ************************************ 00:28:30.604 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:30.604 * Looking for test storage... 00:28:30.605 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:30.605 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:30.605 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:28:30.605 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:30.865 
10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:30.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:30.865 --rc genhtml_branch_coverage=1 00:28:30.865 --rc genhtml_function_coverage=1 00:28:30.865 --rc genhtml_legend=1 00:28:30.865 --rc geninfo_all_blocks=1 00:28:30.865 --rc geninfo_unexecuted_blocks=1 00:28:30.865 00:28:30.865 ' 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:30.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:30.865 --rc genhtml_branch_coverage=1 00:28:30.865 --rc genhtml_function_coverage=1 00:28:30.865 --rc genhtml_legend=1 00:28:30.865 --rc geninfo_all_blocks=1 00:28:30.865 --rc geninfo_unexecuted_blocks=1 00:28:30.865 00:28:30.865 ' 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:30.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:30.865 --rc genhtml_branch_coverage=1 00:28:30.865 --rc 
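The long scripts/common.sh passage above is the coverage setup deciding how to drive lcov: 'lt 1.15 2' splits both version strings on '.', '-' and ':' and compares them field by field (each field validated as decimal first), so the comparison resolves on the very first field, 1 < 2. A simplified sketch of cmp_versions; the zero-padding of missing fields and the operators this trace does not exercise are assumptions:

    cmp_versions() {
        local -a ver1 ver2
        local IFS=.-: op=$2 v d1 d2
        read -ra ver1 <<< "$1"               # @336: "1.15" -> (1 15)
        read -ra ver2 <<< "$3"               # @337: "2"    -> (2)
        for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}    # missing fields compare as 0 (assumed)
            if (( d1 != d2 )); then              # @367/@368: first differing field decides
                case $op in
                    '<') (( d1 < d2 )); return ;;    # taken here: 1 < 2 -> success
                    '>') (( d1 > d2 )); return ;;
                esac
                return 1
            fi
        done
        [[ $op == *=* ]]                     # equal versions satisfy only <=, >=, ==
    }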
genhtml_function_coverage=1 00:28:30.865 --rc genhtml_legend=1 00:28:30.865 --rc geninfo_all_blocks=1 00:28:30.865 --rc geninfo_unexecuted_blocks=1 00:28:30.865 00:28:30.865 ' 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:30.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:30.865 --rc genhtml_branch_coverage=1 00:28:30.865 --rc genhtml_function_coverage=1 00:28:30.865 --rc genhtml_legend=1 00:28:30.865 --rc geninfo_all_blocks=1 00:28:30.865 --rc geninfo_unexecuted_blocks=1 00:28:30.865 00:28:30.865 ' 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:28:30.865 10:42:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:28:30.865 10:42:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:30.865 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:30.866 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:30.866 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:30.866 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:30.866 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:30.866 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:30.866 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:30.866 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:30.866 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:28:30.866 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:30.866 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:30.866 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:30.866 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:30.866 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:30.866 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:30.866 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:30.866 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:30.866 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:30.866 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:30.866 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:28:30.866 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:37.435 10:42:10 
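build_nvmf_app_args, traced above, is where this job earns its interrupt_mode name: every nvmf_tgt invocation gets the shared-memory id, the full 0xFFFF tracepoint mask, and --interrupt-mode appended. Roughly, per the traced branches (the variable behind the '[' 1 -eq 1 ']' guard at @33 is not named in the trace, so the name used here is an assumption):

    build_nvmf_app_args() {
        NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)     # @29: shm id + all tracepoint groups
        NVMF_APP+=("${NO_HUGE[@]}")                     # @31: empty array in this run
        if [ "$SPDK_TEST_INTERRUPT_MODE" -eq 1 ]; then  # @33: guard name assumed
            NVMF_APP+=(--interrupt-mode)                # @34: drives the 'intr mode' notices later
        fi
    }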
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:37.435 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:37.435 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
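The device scan above builds per-family lists of known NIC PCI ids, then narrows to the family this job asked for (SPDK_TEST_NVMF_NICS=e810); both E810 ports (device id 0x159b, driver ice) pass the unknown/unbound checks, and each BDF is then mapped to its kernel netdev under /sys/bus/pci/devices/<bdf>/net. In outline (pci_bus_cache is assumed to map "vendor:device" keys to BDF addresses):

    intel=0x8086 mellanox=0x15b3                  # @313
    e810+=(${pci_bus_cache["$intel:0x1592"]})     # @325: E810-C
    e810+=(${pci_bus_cache["$intel:0x159b"]})     # @326: matches 0000:af:00.0 and .1 here
    x722+=(${pci_bus_cache["$intel:0x37d2"]})     # @328
    pci_devs=("${e810[@]}")                       # @356: e810-only run
    for pci in "${pci_devs[@]}"; do               # @366
        echo "Found $pci"                         # @367, printed with vendor/device ids
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # @411: BDF -> netdev (cvl_0_0, cvl_0_1)
    done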
00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:37.435 Found net devices under 0000:af:00.0: cvl_0_0 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:37.435 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:37.436 Found net devices under 0000:af:00.1: cvl_0_1 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:37.436 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:37.436 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.369 ms 00:28:37.436 00:28:37.436 --- 10.0.0.2 ping statistics --- 00:28:37.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:37.436 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:37.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:37.436 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:28:37.436 00:28:37.436 --- 10.0.0.1 ping statistics --- 00:28:37.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:37.436 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1698605 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1698605 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1698605 ']' 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
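Stepping back over the nvmf_tcp_init commands traced above: the target port cvl_0_0 is moved into a private namespace with 10.0.0.2 while the initiator keeps cvl_0_1 with 10.0.0.1, an iptables rule opens TCP/4420, and one ping in each direction proves the path before any NVMe/TCP traffic flows (0.369 ms and 0.191 ms round trips here). The traced commands, collected:

    ip netns add cvl_0_0_ns_spdk                               # @271
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # @274: target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # @277: initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # @278: target side
    ip link set cvl_0_1 up                                     # @281
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up       # @283
    ip netns exec cvl_0_0_ns_spdk ip link set lo up            # @284
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'   # @287/@790
    ping -c 1 10.0.0.2                                         # @290
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # @291

This is also why nvmf_tgt below is launched under 'ip netns exec cvl_0_0_ns_spdk': it has to listen on 10.0.0.2 inside that namespace.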
00:28:37.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:37.436 [2024-12-12 10:42:10.657398] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:37.436 [2024-12-12 10:42:10.658322] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:28:37.436 [2024-12-12 10:42:10.658355] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:37.436 [2024-12-12 10:42:10.738440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:37.436 [2024-12-12 10:42:10.779809] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:37.436 [2024-12-12 10:42:10.779846] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:37.436 [2024-12-12 10:42:10.779853] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:37.436 [2024-12-12 10:42:10.779859] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:37.436 [2024-12-12 10:42:10.779865] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:37.436 [2024-12-12 10:42:10.781332] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:37.436 [2024-12-12 10:42:10.781438] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:37.436 [2024-12-12 10:42:10.781549] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:37.436 [2024-12-12 10:42:10.781551] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:28:37.436 [2024-12-12 10:42:10.849531] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:37.436 [2024-12-12 10:42:10.850372] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:37.436 [2024-12-12 10:42:10.850606] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:28:37.436 [2024-12-12 10:42:10.851024] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:37.436 [2024-12-12 10:42:10.851063] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
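A note on the core masks in play: nvmf_tgt was started with -m 0x1E, and 0x1E = 16 + 8 + 4 + 2 = binary 11110, i.e. bits 1 through 4, which is exactly why the four 'Reactor started on core N' notices above name cores 1-4 and leave core 0 free. The bdevperf initiator launched further below runs with -c 0x1 (core 0 only, 'Total cores available: 1'), so target and initiator reactors never share a core on this host.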
00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:37.436 [2024-12-12 10:42:10.914313] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:37.436 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:37.437 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:37.437 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:28:37.437 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:28:37.437 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.437 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:37.437 Malloc0 00:28:37.437 [2024-12-12 10:42:10.998551] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:37.437 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.437 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:28:37.437 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:37.437 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:37.437 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1698724 00:28:37.437 10:42:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1698724 /var/tmp/bdevperf.sock 00:28:37.437 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1698724 ']' 00:28:37.437 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:37.437 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:37.437 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:28:37.437 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:37.437 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:28:37.437 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:37.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:37.437 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:28:37.437 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:37.437 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:37.437 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.437 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.437 { 00:28:37.437 "params": { 00:28:37.437 "name": "Nvme$subsystem", 00:28:37.437 "trtype": "$TEST_TRANSPORT", 00:28:37.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.437 "adrfam": "ipv4", 00:28:37.437 "trsvcid": "$NVMF_PORT", 00:28:37.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.437 "hdgst": ${hdgst:-false}, 00:28:37.437 "ddgst": ${ddgst:-false} 00:28:37.437 }, 00:28:37.437 "method": "bdev_nvme_attach_controller" 00:28:37.437 } 00:28:37.437 EOF 00:28:37.437 )") 00:28:37.437 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:28:37.437 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
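bdevperf receives its controller configuration on /dev/fd/63, built by gen_nvmf_target_json: one heredoc per requested subsystem id, with the shell expanding $subsystem and the NVMF_* variables before jq ever sees the text (the expanded params block is printed just below). A sketch of the templating; how multiple entries are wrapped into the final document is not visible in this trace, so the comma-join via IFS=, is the assumed role of those last lines:

    gen_nvmf_target_json() {
        local subsystem config=()
        for subsystem in "${@:-1}"; do       # @562: invoked as gen_nvmf_target_json 0
            # @582: <<- strips the leading tabs; the body is shell-expanded, not jq-templated
            config+=("$(cat <<-EOF
	{
	  "params": {
	    "name": "Nvme$subsystem",
	    "trtype": "$TEST_TRANSPORT",
	    "traddr": "$NVMF_FIRST_TARGET_IP",
	    "adrfam": "ipv4",
	    "trsvcid": "$NVMF_PORT",
	    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
	    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
	    "hdgst": ${hdgst:-false},
	    "ddgst": ${ddgst:-false}
	  },
	  "method": "bdev_nvme_attach_controller"
	}
	EOF
            )")
        done
        local IFS=,                          # @585: joins entries with commas (assumed)
        jq . <<< "${config[*]}"              # @584: pretty-print for the --json fd
    }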
00:28:37.437 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:28:37.437 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:37.437 "params": { 00:28:37.437 "name": "Nvme0", 00:28:37.437 "trtype": "tcp", 00:28:37.437 "traddr": "10.0.0.2", 00:28:37.437 "adrfam": "ipv4", 00:28:37.437 "trsvcid": "4420", 00:28:37.437 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:37.437 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:37.437 "hdgst": false, 00:28:37.437 "ddgst": false 00:28:37.437 }, 00:28:37.437 "method": "bdev_nvme_attach_controller" 00:28:37.437 }' 00:28:37.437 [2024-12-12 10:42:11.094690] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:28:37.437 [2024-12-12 10:42:11.094740] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1698724 ] 00:28:37.437 [2024-12-12 10:42:11.168977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:37.437 [2024-12-12 10:42:11.209713] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:37.696 Running I/O for 10 seconds... 00:28:37.696 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:37.696 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:28:37.696 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:37.696 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.696 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:37.696 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.696 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:37.696 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:28:37.696 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:37.696 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:28:37.696 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:28:37.696 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:28:37.696 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:28:37.696 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:37.696 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:37.696 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:37.696 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.696 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:37.696 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.696 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=78 00:28:37.696 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 78 -ge 100 ']' 00:28:37.696 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:28:37.957 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:28:37.957 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:37.957 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:37.957 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:37.957 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.957 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:37.957 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.957 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:28:37.958 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:28:37.958 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:28:37.958 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:28:37.958 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:28:37.958 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:37.958 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.958 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:37.958 [2024-12-12 10:42:11.970076] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68e730 is same with the state(6) to be set 00:28:37.958 [2024-12-12 10:42:11.970120] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68e730 is same with the state(6) to be set 00:28:37.958 [2024-12-12 10:42:11.970128] 
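The two iostat samples just traced are waitforio gating the fault injection: bdevperf must have completed at least 100 reads on Nvme0n1 before the test yanks the host from the subsystem (the nvmf_subsystem_remove_host call at @84, which is what unleashes the flood of qpair state-change errors that follows). The helper, sketched with the rpc_cmd wrapper shown as a direct rpc.py call:

    waitforio() {
        local rpc_sock=$1 bdev=$2 ret=1 i read_io_count
        for (( i = 10; i != 0; i-- )); do             # @54: at most 10 samples
            read_io_count=$(scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
                            | jq -r '.bdevs[0].num_read_ops')   # @55: 78, then 707 here
            if [ "$read_io_count" -ge 100 ]; then     # @58: enough I/O has completed
                ret=0                                 # @59
                break                                 # @60
            fi
            sleep 0.25                                # @62
        done
        return $ret                                   # @64
    }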
00:28:37.959 [2024-12-12 10:42:11.970687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.959 [2024-12-12 10:42:11.970719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.961 [2024-12-12 10:42:11.971668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.961 [2024-12-12 10:42:11.971675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.961 [2024-12-12 10:42:11.971682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2a770 is same with the state(6) to be set
00:28:37.961 [2024-12-12 10:42:11.972654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:28:37.961 task offset: 98304 on job bdev=Nvme0n1 fails
00:28:37.961 
00:28:37.961 Latency(us)
00:28:37.961 [2024-12-12T09:42:11.984Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:37.961 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:37.961 Job: Nvme0n1 ended in about 0.40 seconds with error
00:28:37.961 Verification LBA range: start 0x0 length 0x400
00:28:37.961 Nvme0n1 : 0.40 1897.61 118.60 158.13 0.00 30309.86 6147.90 27088.21
00:28:37.961 [2024-12-12T09:42:11.984Z] ===================================================================================================================
00:28:37.961 [2024-12-12T09:42:11.984Z] Total : 1897.61 118.60 158.13 0.00 30309.86 6147.90 27088.21
00:28:37.961 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:37.961 [2024-12-12 10:42:11.975061] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:37.961 [2024-12-12 10:42:11.975083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa117e0 (9): Bad file descriptor
00:28:37.961 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:28:37.961 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:37.961 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:28:37.961 [2024-12-12 10:42:11.975998] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:28:37.961 [2024-12-12 10:42:11.976074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:28:37.961 [2024-12-12 10:42:11.976095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.961 [2024-12-12 10:42:11.976111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
00:28:37.961 [2024-12-12 10:42:11.976118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
00:28:37.961 [2024-12-12 10:42:11.976124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:37.961 [2024-12-12 10:42:11.976131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa117e0
00:28:37.961 [2024-12-12 10:42:11.976149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa117e0 (9): Bad file descriptor
00:28:37.961 [2024-12-12 10:42:11.976160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:28:37.961 [2024-12-12 10:42:11.976166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:28:37.961 [2024-12-12 10:42:11.976174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:28:37.961 [2024-12-12 10:42:11.976182] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
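
The failure above is the point of the test: nvmf_subsystem_remove_host pulled host0 off the subsystem's allow list, so the target deleted the submission queue (aborting the in-flight reads from lba 98304 through 106368) and the host's automatic reconnect was refused at FABRIC CONNECT time (sct 1, sc 132). Reduced to the two RPCs the trace exercises, against this run's NQNs:

  # Revoke access: host0's active qpairs are torn down and reconnects are refused.
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  # Restore access: a fresh bdevperf attach with hostnqn=nqn.2016-06.io.spdk:host0 succeeds again.
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
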
00:28:38.220 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:38.220 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:28:39.157 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1698724
00:28:39.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1698724) - No such process
00:28:39.157 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true
00:28:39.157 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:28:39.157 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:28:39.157 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:28:39.157 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:28:39.157 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:28:39.157 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:28:39.157 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:28:39.157 {
00:28:39.157 "params": {
00:28:39.157 "name": "Nvme$subsystem",
00:28:39.157 "trtype": "$TEST_TRANSPORT",
00:28:39.157 "traddr": "$NVMF_FIRST_TARGET_IP",
00:28:39.157 "adrfam": "ipv4",
00:28:39.157 "trsvcid": "$NVMF_PORT",
00:28:39.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:28:39.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:28:39.157 "hdgst": ${hdgst:-false},
00:28:39.157 "ddgst": ${ddgst:-false}
00:28:39.157 },
00:28:39.157 "method": "bdev_nvme_attach_controller"
00:28:39.157 }
00:28:39.157 EOF
00:28:39.157 )")
00:28:39.157 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat
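
The --json /dev/fd/62 argument above is bash process substitution: gen_nvmf_target_json renders the attach-controller config on an anonymous pipe and bdevperf reads it as if it were a file, so no temporary JSON ever touches disk. Stripped of the CI paths, the invocation is roughly the following (a sketch, assuming it runs inside the test environment where gen_nvmf_target_json is sourced):

  # bdevperf consumes the generated JSON via process substitution instead of a temp file.
  ./build/examples/bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1
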
00:28:39.157 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:28:39.157 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:28:39.157 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:28:39.157 "params": {
00:28:39.157 "name": "Nvme0",
00:28:39.157 "trtype": "tcp",
00:28:39.157 "traddr": "10.0.0.2",
00:28:39.157 "adrfam": "ipv4",
00:28:39.157 "trsvcid": "4420",
00:28:39.157 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:28:39.157 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:28:39.157 "hdgst": false,
00:28:39.157 "ddgst": false
00:28:39.157 },
00:28:39.157 "method": "bdev_nvme_attach_controller"
00:28:39.157 }'
00:28:39.157 [2024-12-12 10:42:13.039322] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization...
00:28:39.157 [2024-12-12 10:42:13.039369] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1699182 ]
00:28:39.157 [2024-12-12 10:42:13.115559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:39.157 [2024-12-12 10:42:13.156513] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:28:39.726 Running I/O for 1 seconds...
00:28:40.663 1984.00 IOPS, 124.00 MiB/s
00:28:40.663 
00:28:40.663 Latency(us)
00:28:40.663 [2024-12-12T09:42:14.686Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:40.663 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:40.663 Verification LBA range: start 0x0 length 0x400
00:28:40.663 Nvme0n1 : 1.00 2038.43 127.40 0.00 0.00 30907.88 6709.64 27088.21
00:28:40.663 [2024-12-12T09:42:14.686Z] ===================================================================================================================
00:28:40.663 [2024-12-12T09:42:14.686Z] Total : 2038.43 127.40 0.00 0.00 30907.88 6709.64 27088.21
00:28:40.663 10:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:28:40.663 10:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:28:40.663 10:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:28:40.663 10:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:28:40.663 10:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:28:40.663 10:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:40.663 10:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:28:40.663 10:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:40.663 10:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:28:40.663 10:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:40.663 10:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:40.663 rmmod nvme_tcp
00:28:40.663 rmmod nvme_fabrics
00:28:40.663 rmmod nvme_keyring
00:28:40.923 10:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:40.923 10:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:28:40.923 10:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:28:40.923 10:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1698605 ']'
00:28:40.923 10:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1698605
00:28:40.923 10:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1698605 ']'
00:28:40.923 10:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1698605
00:28:40.923 10:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname
00:28:40.923 10:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:40.923 10:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1698605
00:28:40.923 10:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:40.923 10:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:40.923 10:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1698605'
00:28:40.923 killing process with pid 1698605
00:28:40.923 10:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1698605
00:28:40.923 10:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1698605
00:28:40.923 [2024-12-12 10:42:14.924346] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:28:41.182 10:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:28:41.182 10:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:28:41.182 10:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:28:41.182 10:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr
00:28:41.182 10:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save
00:28:41.182 10:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore
00:28:41.182 10:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:28:41.182 10:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
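
killprocess, traced a few records above, is autotest_common.sh's guarded shutdown helper: it checks that a pid was given and is still alive, refuses to kill a process whose comm is sudo, then kills and reaps it. A condensed sketch of the logic the trace walks through (reconstructed from the trace, not copied from the script):

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1                    # no pid given
      kill -0 "$pid" || return 1                   # process must still exist
      if [ "$(uname)" = Linux ]; then
          local process_name
          process_name=$(ps --no-headers -o comm= "$pid")
          [ "$process_name" = sudo ] && return 1   # never kill the sudo wrapper itself
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                  # reap it so the exit status is collected
  }
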
00:28:41.183 10:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns
00:28:41.183 10:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:41.183 10:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:41.183 10:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:43.088 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:28:43.088 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:28:43.088 
00:28:43.088 real 0m12.512s
00:28:43.088 user 0m18.979s
00:28:43.088 sys 0m6.263s
00:28:43.088 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:43.088 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:28:43.088 ************************************
00:28:43.088 END TEST nvmf_host_management
00:28:43.088 ************************************
00:28:43.088 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode
00:28:43.088 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:28:43.088 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:43.088 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:28:43.088 ************************************
00:28:43.088 START TEST nvmf_lvol
00:28:43.088 ************************************
00:28:43.088 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode
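
run_test is the harness wrapper visible above: it prints the START/END banners, times the test script (the real/user/sys block), and records the verdict. The nvmf_lvol suite it is launching here can be started the same way outside the CI pipeline (a sketch; $rootdir is assumed to point at the SPDK checkout):

  # Same invocation the harness makes, runnable from an SPDK checkout.
  run_test nvmf_lvol "$rootdir/test/nvmf/target/nvmf_lvol.sh" --transport=tcp --interrupt-mode
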
00:28:43.347 * Looking for test storage...
00:28:43.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:28:43.347 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:28:43.347 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version
00:28:43.347 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:28:43.347 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:28:43.347 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:28:43.347 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l
00:28:43.347 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l
00:28:43.347 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-:
00:28:43.347 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1
00:28:43.347 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-:
00:28:43.347 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2
00:28:43.347 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<'
00:28:43.347 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2
00:28:43.347 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1
00:28:43.347 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:28:43.347 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in
00:28:43.347 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1
00:28:43.347 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 ))
00:28:43.347 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:28:43.347 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1
00:28:43.347 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1
00:28:43.347 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:28:43.347 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1
00:28:43.347 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1
00:28:43.347 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2
00:28:43.347 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2
00:28:43.347 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:28:43.347 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2
00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2
00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0
00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:28:43.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:43.348 --rc genhtml_branch_coverage=1
00:28:43.348 --rc genhtml_function_coverage=1
00:28:43.348 --rc genhtml_legend=1
00:28:43.348 --rc geninfo_all_blocks=1
00:28:43.348 --rc geninfo_unexecuted_blocks=1
00:28:43.348 
00:28:43.348 '
00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:28:43.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:43.348 --rc genhtml_branch_coverage=1
00:28:43.348 --rc genhtml_function_coverage=1
00:28:43.348 --rc genhtml_legend=1
00:28:43.348 --rc geninfo_all_blocks=1
00:28:43.348 --rc geninfo_unexecuted_blocks=1
00:28:43.348 
00:28:43.348 '
00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:28:43.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:43.348 --rc genhtml_branch_coverage=1
00:28:43.348 --rc genhtml_function_coverage=1
00:28:43.348 --rc genhtml_legend=1
00:28:43.348 --rc geninfo_all_blocks=1
00:28:43.348 --rc geninfo_unexecuted_blocks=1
00:28:43.348 
00:28:43.348 '
00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:28:43.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:43.348 --rc genhtml_branch_coverage=1
00:28:43.348 --rc genhtml_function_coverage=1
00:28:43.348 --rc genhtml_legend=1
00:28:43.348 --rc geninfo_all_blocks=1
00:28:43.348 --rc geninfo_unexecuted_blocks=1
00:28:43.348 
00:28:43.348 '
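
The lt 1.15 2 check above (is the installed lcov older than major version 2?) walks scripts/common.sh's cmp_versions: both version strings are split on '.', '-' and ':', then compared field by field as integers until one side wins. A compressed sketch of that comparison (a reconstruction of the traced control flow, not the script verbatim):

  cmp_versions() {                      # usage: cmp_versions 1.15 '<' 2
      local ver1 ver2 op=$2 v
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$3"
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          # Missing fields count as 0; here 1.15 vs 2 decides on the first field, 1 < 2.
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
      done
      [[ $op == '=' ]]                  # every field compared equal
  }
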
00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s
00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob
00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH
00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0
00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
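
build_nvmf_app_args, traced above, assembles the target's command line in the NVMF_APP array: the shared-memory id, a reactor event mask, and, because this job runs the target in interrupt mode, the --interrupt-mode flag. Expanded, the array launches something like the following (the nvmf_tgt path is the standard SPDK build location; the trace itself never prints the final exec):

  # Rough expansion of the NVMF_APP array assembled above.
  build/bin/nvmf_tgt -i "$NVMF_APP_SHM_ID" -e 0xFFFF --interrupt-mode
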
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:28:43.348 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:49.921 10:42:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:49.921 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:49.921 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:49.921 Found net devices under 0000:af:00.0: cvl_0_0 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:49.921 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:49.922 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:49.922 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:49.922 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:49.922 Found net devices under 0000:af:00.1: cvl_0_1 00:28:49.922 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:49.922 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:49.922 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:28:49.922 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:49.922 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:49.922 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:49.922 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:49.922 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:49.922 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:49.922 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:49.922 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:49.922 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:49.922 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:49.922 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:49.922 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:49.922 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:49.922 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:49.922 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:49.922 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:49.922 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:49.922 
10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:49.922 10:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:49.922 10:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:49.922 10:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:49.922 10:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:49.922 10:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:49.922 10:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:49.922 10:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:49.922 10:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:49.922 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:49.922 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:28:49.922 00:28:49.922 --- 10.0.0.2 ping statistics --- 00:28:49.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:49.922 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:28:49.922 10:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:49.922 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:49.922 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:28:49.922 00:28:49.922 --- 10.0.0.1 ping statistics --- 00:28:49.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:49.922 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:28:49.922 10:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:49.922 10:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:28:49.922 10:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:49.922 10:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:49.922 10:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:49.922 10:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:49.922 10:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:49.922 10:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:49.922 10:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:49.922 10:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:28:49.922 10:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:49.922 10:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:49.922 10:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:49.922 10:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1702875 00:28:49.922 10:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:28:49.922 10:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1702875 00:28:49.922 10:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1702875 ']' 00:28:49.922 10:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:49.922 10:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:49.922 10:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:49.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:49.922 10:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:49.922 10:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:49.922 [2024-12-12 10:42:23.236271] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
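For reference, the nvmf_tcp_init wiring traced above (common.sh@250-291) reduces to the sketch below. The interface names cvl_0_0/cvl_0_1, the namespace name, and the 10.0.0.0/24 addresses are the ones this run uses; the sketch assumes the two E810 ports are cabled back-to-back, which the successful cross-namespace pings support.

  # move the target-side port into its own namespace so target and
  # initiator talk over the physical link rather than the kernel loopback
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator, root netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target, inside netns
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port on the initiator interface, then verify both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1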
00:28:49.922 [2024-12-12 10:42:23.237248] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:28:49.922 [2024-12-12 10:42:23.237286] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:49.922 [2024-12-12 10:42:23.315546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:49.922 [2024-12-12 10:42:23.357616] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:49.922 [2024-12-12 10:42:23.357652] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:49.922 [2024-12-12 10:42:23.357659] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:49.922 [2024-12-12 10:42:23.357665] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:49.922 [2024-12-12 10:42:23.357669] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:49.922 [2024-12-12 10:42:23.358937] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:49.922 [2024-12-12 10:42:23.359046] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:49.922 [2024-12-12 10:42:23.359048] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:49.922 [2024-12-12 10:42:23.427541] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:49.922 [2024-12-12 10:42:23.428454] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:49.922 [2024-12-12 10:42:23.428495] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:49.922 [2024-12-12 10:42:23.428715] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:28:50.181 10:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:50.181 10:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:28:50.181 10:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:50.181 10:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:50.181 10:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:50.181 10:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:50.181 10:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:50.440 [2024-12-12 10:42:24.287788] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:50.440 10:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:50.699 10:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:28:50.699 10:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:50.958 10:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:28:50.958 10:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:28:50.958 10:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:28:51.219 10:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=39522afe-be78-4855-ae8a-2b14f3558957 00:28:51.219 10:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 39522afe-be78-4855-ae8a-2b14f3558957 lvol 20 00:28:51.478 10:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=527006c9-ca30-4d73-a789-0d4d8686f140 00:28:51.478 10:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:51.737 10:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 527006c9-ca30-4d73-a789-0d4d8686f140 00:28:51.737 10:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:51.995 [2024-12-12 10:42:25.903734] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:28:51.995 10:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:52.254 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:28:52.254 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1703360 00:28:52.254 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:28:53.191 10:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 527006c9-ca30-4d73-a789-0d4d8686f140 MY_SNAPSHOT 00:28:53.450 10:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=0ed4f918-389f-43a5-b443-3839be5701c1 00:28:53.450 10:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 527006c9-ca30-4d73-a789-0d4d8686f140 30 00:28:53.708 10:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 0ed4f918-389f-43a5-b443-3839be5701c1 MY_CLONE 00:28:53.967 10:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=d64f655c-b3da-4f07-af29-b9432768d115 00:28:53.967 10:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate d64f655c-b3da-4f07-af29-b9432768d115 00:28:54.535 10:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1703360 00:29:02.654 Initializing NVMe Controllers 00:29:02.654 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:02.654 Controller IO queue size 128, less than required. 00:29:02.654 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:02.654 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:29:02.654 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:29:02.654 Initialization complete. Launching workers. 
00:29:02.654 ======================================================== 00:29:02.654 Latency(us) 00:29:02.654 Device Information : IOPS MiB/s Average min max 00:29:02.654 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12231.40 47.78 10466.18 4282.72 60459.02 00:29:02.654 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12358.10 48.27 10357.61 3486.63 55670.19 00:29:02.654 ======================================================== 00:29:02.654 Total : 24589.50 96.05 10411.62 3486.63 60459.02 00:29:02.654 00:29:02.654 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:02.913 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 527006c9-ca30-4d73-a789-0d4d8686f140 00:29:03.172 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 39522afe-be78-4855-ae8a-2b14f3558957 00:29:03.431 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:29:03.431 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:29:03.431 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:29:03.431 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:03.431 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:29:03.431 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:03.431 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:29:03.431 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:03.431 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:03.431 rmmod nvme_tcp 00:29:03.431 rmmod nvme_fabrics 00:29:03.431 rmmod nvme_keyring 00:29:03.431 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:03.431 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:29:03.431 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:29:03.431 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1702875 ']' 00:29:03.431 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1702875 00:29:03.431 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1702875 ']' 00:29:03.431 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1702875 00:29:03.431 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:29:03.431 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:03.431 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1702875 00:29:03.431 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:03.431 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:03.431 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1702875' 00:29:03.431 killing process with pid 1702875 00:29:03.431 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1702875 00:29:03.431 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1702875 00:29:03.690 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:03.690 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:03.690 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:03.690 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:29:03.690 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:03.690 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:29:03.690 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:29:03.690 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:03.690 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:03.690 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:03.690 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:03.690 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:05.596 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:05.855 00:29:05.855 real 0m22.531s 00:29:05.855 user 0m56.153s 00:29:05.855 sys 0m9.812s 00:29:05.855 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:05.855 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:05.855 ************************************ 00:29:05.855 END TEST nvmf_lvol 00:29:05.856 ************************************ 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:05.856 ************************************ 00:29:05.856 START TEST nvmf_lvs_grow 00:29:05.856 
************************************ 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:05.856 * Looking for test storage... 00:29:05.856 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:05.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.856 --rc genhtml_branch_coverage=1 00:29:05.856 --rc genhtml_function_coverage=1 00:29:05.856 --rc genhtml_legend=1 00:29:05.856 --rc geninfo_all_blocks=1 00:29:05.856 --rc geninfo_unexecuted_blocks=1 00:29:05.856 00:29:05.856 ' 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:05.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.856 --rc genhtml_branch_coverage=1 00:29:05.856 --rc genhtml_function_coverage=1 00:29:05.856 --rc genhtml_legend=1 00:29:05.856 --rc geninfo_all_blocks=1 00:29:05.856 --rc geninfo_unexecuted_blocks=1 00:29:05.856 00:29:05.856 ' 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:05.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.856 --rc genhtml_branch_coverage=1 00:29:05.856 --rc genhtml_function_coverage=1 00:29:05.856 --rc genhtml_legend=1 00:29:05.856 --rc geninfo_all_blocks=1 00:29:05.856 --rc geninfo_unexecuted_blocks=1 00:29:05.856 00:29:05.856 ' 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:05.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.856 --rc genhtml_branch_coverage=1 00:29:05.856 --rc genhtml_function_coverage=1 00:29:05.856 --rc genhtml_legend=1 00:29:05.856 --rc geninfo_all_blocks=1 00:29:05.856 --rc geninfo_unexecuted_blocks=1 00:29:05.856 00:29:05.856 ' 00:29:05.856 10:42:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:05.856 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:06.116 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:06.116 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:06.116 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:06.116 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:06.116 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:06.116 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:06.116 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:06.116 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:06.116 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:29:06.116 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:06.116 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:06.116 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:06.116 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same three toolchain prefixes repeated several times]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.116 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=[same entries, go toolchain rotated to the front] 00:29:06.116 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=[same entries, protoc rotated to the front] 00:29:06.116 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:29:06.116 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo [the exported PATH, identical to the @4 value] 00:29:06.116 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:29:06.116 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:06.116 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:06.116 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:06.116 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:06.116 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:29:06.116 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:06.116 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:06.116 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:06.116 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:06.116 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:06.116 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:06.116 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:06.116 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:29:06.116 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:06.116 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:06.116 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:06.116 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:06.116 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:06.116 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:06.116 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:06.116 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:06.116 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:06.116 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:06.116 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:29:06.116 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:12.685 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:12.685 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:29:12.685 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:12.685 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:12.685 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:12.685 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:12.685 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:12.685 10:42:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:29:12.685 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:12.685 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:29:12.685 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:29:12.685 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:29:12.685 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:29:12.685 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:29:12.685 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:29:12.685 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:12.685 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:12.685 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:12.685 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:12.685 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:12.685 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:12.685 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:12.685 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:12.685 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:12.685 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:12.685 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:12.685 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:12.685 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:12.685 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:12.685 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:12.685 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:12.685 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:12.685 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:12.686 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:12.686 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:12.686 Found net devices under 0000:af:00.0: cvl_0_0 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:12.686 Found net devices under 0000:af:00.1: cvl_0_1 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:12.686 10:42:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:12.686 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:12.686 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.327 ms 00:29:12.686 00:29:12.686 --- 10.0.0.2 ping statistics --- 00:29:12.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:12.686 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:12.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:12.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:29:12.686 00:29:12.686 --- 10.0.0.1 ping statistics --- 00:29:12.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:12.686 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1708602 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1708602 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1708602 ']' 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:12.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:12.686 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:12.686 [2024-12-12 10:42:45.934801] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
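The namespace plumbing traced above is the heart of nvmf_tcp_init: the target-side port moves into a private network namespace, the initiator port stays in the root namespace, both get addresses on 10.0.0.0/24, the NVMe/TCP port is opened in the firewall, and connectivity is ping-verified in both directions before the target starts inside the namespace. A condensed hand-written equivalent (interface names and addresses match the trace; this is a sketch, not the suite's function):

    NS=cvl_0_0_ns_spdk
    TGT_IF=cvl_0_0;  TGT_IP=10.0.0.2      # target side, lives in $NS
    INIT_IF=cvl_0_1; INIT_IP=10.0.0.1     # initiator side, stays in the root ns

    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INIT_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"     # target port vanishes from the root ns
    ip addr add "$INIT_IP/24" dev "$INIT_IF"
    ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
    ip link set "$INIT_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 "$TGT_IP"                             # root ns -> namespace
    ip netns exec "$NS" ping -c 1 "$INIT_IP"        # namespace -> root ns

    # The target then runs entirely inside the namespace, as the trace shows:
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &

Running nvmf_tgt under --interrupt-mode is what the thread.c and reactor.c notices above confirm: the app thread and the nvmf poll group are switched to interrupt-driven scheduling instead of busy polling, which is the point of this interrupt-mode test variant.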
00:29:12.686 [2024-12-12 10:42:45.935713] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:29:12.686 [2024-12-12 10:42:45.935745] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:12.686 [2024-12-12 10:42:45.996827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.686 [2024-12-12 10:42:46.037583] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:12.687 [2024-12-12 10:42:46.037617] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:12.687 [2024-12-12 10:42:46.037624] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:12.687 [2024-12-12 10:42:46.037630] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:12.687 [2024-12-12 10:42:46.037636] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:12.687 [2024-12-12 10:42:46.038132] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:12.687 [2024-12-12 10:42:46.105397] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:12.687 [2024-12-12 10:42:46.105614] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:12.687 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:12.687 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:29:12.687 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:12.687 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:12.687 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:12.687 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:12.687 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:12.687 [2024-12-12 10:42:46.334789] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:12.687 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:29:12.687 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:12.687 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:12.687 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:12.687 ************************************ 00:29:12.687 START TEST lvs_grow_clean 00:29:12.687 ************************************ 00:29:12.687 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:29:12.687 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:12.687 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:12.687 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:12.687 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:12.687 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:12.687 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:12.687 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:12.687 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:12.687 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:12.687 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:12.687 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:12.946 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=6a2f9a44-8fea-4a8f-a549-869b8fbc3309 00:29:12.946 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a2f9a44-8fea-4a8f-a549-869b8fbc3309 00:29:12.946 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:13.205 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:13.205 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:13.205 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6a2f9a44-8fea-4a8f-a549-869b8fbc3309 lvol 150 00:29:13.205 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=e6dd6ab9-71d8-4eb6-9e08-0e564b497080 00:29:13.205 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:13.205 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:13.464 [2024-12-12 10:42:47.366515] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:13.464 [2024-12-12 10:42:47.366667] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:13.464 true 00:29:13.464 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:13.464 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a2f9a44-8fea-4a8f-a549-869b8fbc3309 00:29:13.723 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:13.723 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:13.723 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e6dd6ab9-71d8-4eb6-9e08-0e564b497080 00:29:13.982 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:14.240 [2024-12-12 10:42:48.107024] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:14.240 10:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:14.499 10:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1709035 00:29:14.499 10:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:14.499 10:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:14.499 10:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1709035 /var/tmp/bdevperf.sock 00:29:14.499 10:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1709035 ']' 00:29:14.499 10:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:29:14.499 10:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:14.499 10:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:14.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:14.499 10:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:14.499 10:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:14.499 [2024-12-12 10:42:48.354427] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:29:14.499 [2024-12-12 10:42:48.354474] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1709035 ] 00:29:14.499 [2024-12-12 10:42:48.428949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.499 [2024-12-12 10:42:48.469760] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:14.758 10:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:14.758 10:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:29:14.758 10:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:15.017 Nvme0n1 00:29:15.017 10:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:15.017 [ 00:29:15.017 { 00:29:15.017 "name": "Nvme0n1", 00:29:15.017 "aliases": [ 00:29:15.017 "e6dd6ab9-71d8-4eb6-9e08-0e564b497080" 00:29:15.017 ], 00:29:15.017 "product_name": "NVMe disk", 00:29:15.017 "block_size": 4096, 00:29:15.017 "num_blocks": 38912, 00:29:15.017 "uuid": "e6dd6ab9-71d8-4eb6-9e08-0e564b497080", 00:29:15.017 "numa_id": 1, 00:29:15.017 "assigned_rate_limits": { 00:29:15.017 "rw_ios_per_sec": 0, 00:29:15.017 "rw_mbytes_per_sec": 0, 00:29:15.017 "r_mbytes_per_sec": 0, 00:29:15.017 "w_mbytes_per_sec": 0 00:29:15.017 }, 00:29:15.017 "claimed": false, 00:29:15.017 "zoned": false, 00:29:15.017 "supported_io_types": { 00:29:15.017 "read": true, 00:29:15.017 "write": true, 00:29:15.017 "unmap": true, 00:29:15.017 "flush": true, 00:29:15.017 "reset": true, 00:29:15.017 "nvme_admin": true, 00:29:15.017 "nvme_io": true, 00:29:15.017 "nvme_io_md": false, 00:29:15.017 "write_zeroes": true, 00:29:15.017 "zcopy": false, 00:29:15.017 "get_zone_info": false, 00:29:15.017 "zone_management": false, 00:29:15.017 "zone_append": false, 00:29:15.017 "compare": true, 00:29:15.017 "compare_and_write": true, 00:29:15.017 "abort": true, 00:29:15.017 "seek_hole": false, 00:29:15.017 "seek_data": false, 00:29:15.017 "copy": true, 
00:29:15.017 "nvme_iov_md": false 00:29:15.017 }, 00:29:15.017 "memory_domains": [ 00:29:15.017 { 00:29:15.017 "dma_device_id": "system", 00:29:15.017 "dma_device_type": 1 00:29:15.017 } 00:29:15.017 ], 00:29:15.017 "driver_specific": { 00:29:15.017 "nvme": [ 00:29:15.017 { 00:29:15.017 "trid": { 00:29:15.017 "trtype": "TCP", 00:29:15.017 "adrfam": "IPv4", 00:29:15.017 "traddr": "10.0.0.2", 00:29:15.017 "trsvcid": "4420", 00:29:15.017 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:15.017 }, 00:29:15.017 "ctrlr_data": { 00:29:15.017 "cntlid": 1, 00:29:15.017 "vendor_id": "0x8086", 00:29:15.017 "model_number": "SPDK bdev Controller", 00:29:15.017 "serial_number": "SPDK0", 00:29:15.017 "firmware_revision": "25.01", 00:29:15.017 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:15.017 "oacs": { 00:29:15.017 "security": 0, 00:29:15.017 "format": 0, 00:29:15.017 "firmware": 0, 00:29:15.017 "ns_manage": 0 00:29:15.017 }, 00:29:15.017 "multi_ctrlr": true, 00:29:15.017 "ana_reporting": false 00:29:15.017 }, 00:29:15.017 "vs": { 00:29:15.017 "nvme_version": "1.3" 00:29:15.017 }, 00:29:15.017 "ns_data": { 00:29:15.017 "id": 1, 00:29:15.017 "can_share": true 00:29:15.017 } 00:29:15.017 } 00:29:15.017 ], 00:29:15.017 "mp_policy": "active_passive" 00:29:15.017 } 00:29:15.017 } 00:29:15.017 ] 00:29:15.017 10:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1709095 00:29:15.017 10:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:15.017 10:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:15.276 Running I/O for 10 seconds... 
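On the initiator side the test follows the usual bdevperf pattern: start the app idle, attach the exported namespace over the fabric, then trigger the workload through the app's RPC socket. A sketch under the same parameters as the trace (the SPDK checkout path is a placeholder for the Jenkins workspace path):

    SPDK=/path/to/spdk                    # placeholder
    SOCK=/var/tmp/bdevperf.sock

    # 1. Start bdevperf idle (-z) so bdevs can be attached over RPC first:
    #    4 KiB random writes, queue depth 128, 10 s run, core mask 0x2.
    "$SPDK/build/examples/bdevperf" -r "$SOCK" -m 0x2 -o 4096 -q 128 \
        -w randwrite -t 10 -S 1 -z &

    # 2. Attach the namespace exported by the target as bdev "Nvme0n1":
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

    # 3. Kick off the timed run against every attached bdev:
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests

The bdev_get_bdevs dump above is how the test confirms the attach worked: 38912 blocks of 4096 bytes is the 150 MiB lvol rounded up to 38 whole 4 MiB clusters, reached over TCP at 10.0.0.2:4420 under nqn.2016-06.io.spdk:cnode0.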
00:29:16.213 Latency(us) 00:29:16.213 [2024-12-12T09:42:50.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:16.213 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:16.213 Nvme0n1 : 1.00 22924.00 89.55 0.00 0.00 0.00 0.00 0.00 00:29:16.213 [2024-12-12T09:42:50.236Z] =================================================================================================================== 00:29:16.213 [2024-12-12T09:42:50.236Z] Total : 22924.00 89.55 0.00 0.00 0.00 0.00 0.00 00:29:16.213 00:29:17.151 10:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6a2f9a44-8fea-4a8f-a549-869b8fbc3309 00:29:17.151 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:17.151 Nvme0n1 : 2.00 23092.00 90.20 0.00 0.00 0.00 0.00 0.00 00:29:17.151 [2024-12-12T09:42:51.174Z] =================================================================================================================== 00:29:17.151 [2024-12-12T09:42:51.174Z] Total : 23092.00 90.20 0.00 0.00 0.00 0.00 0.00 00:29:17.151 00:29:17.151 true 00:29:17.411 10:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a2f9a44-8fea-4a8f-a549-869b8fbc3309 00:29:17.411 10:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:17.411 10:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:17.411 10:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:17.411 10:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1709095 00:29:18.347 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:18.347 Nvme0n1 : 3.00 23274.33 90.92 0.00 0.00 0.00 0.00 0.00 00:29:18.347 [2024-12-12T09:42:52.370Z] =================================================================================================================== 00:29:18.347 [2024-12-12T09:42:52.370Z] Total : 23274.33 90.92 0.00 0.00 0.00 0.00 0.00 00:29:18.347 00:29:19.283 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:19.283 Nvme0n1 : 4.00 23424.75 91.50 0.00 0.00 0.00 0.00 0.00 00:29:19.283 [2024-12-12T09:42:53.306Z] =================================================================================================================== 00:29:19.283 [2024-12-12T09:42:53.306Z] Total : 23424.75 91.50 0.00 0.00 0.00 0.00 0.00 00:29:19.283 00:29:20.219 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:20.219 Nvme0n1 : 5.00 23515.00 91.86 0.00 0.00 0.00 0.00 0.00 00:29:20.219 [2024-12-12T09:42:54.242Z] =================================================================================================================== 00:29:20.219 [2024-12-12T09:42:54.242Z] Total : 23515.00 91.86 0.00 0.00 0.00 0.00 0.00 00:29:20.219 00:29:21.155 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:21.155 Nvme0n1 : 6.00 23596.33 92.17 0.00 0.00 0.00 0.00 0.00 00:29:21.155 [2024-12-12T09:42:55.178Z] 
=================================================================================================================== 00:29:21.155 [2024-12-12T09:42:55.178Z] Total : 23596.33 92.17 0.00 0.00 0.00 0.00 0.00 00:29:21.155 00:29:22.091 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:22.091 Nvme0n1 : 7.00 23636.29 92.33 0.00 0.00 0.00 0.00 0.00 00:29:22.091 [2024-12-12T09:42:56.114Z] =================================================================================================================== 00:29:22.091 [2024-12-12T09:42:56.114Z] Total : 23636.29 92.33 0.00 0.00 0.00 0.00 0.00 00:29:22.091 00:29:23.524 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:23.524 Nvme0n1 : 8.00 23682.12 92.51 0.00 0.00 0.00 0.00 0.00 00:29:23.524 [2024-12-12T09:42:57.547Z] =================================================================================================================== 00:29:23.524 [2024-12-12T09:42:57.547Z] Total : 23682.12 92.51 0.00 0.00 0.00 0.00 0.00 00:29:23.524 00:29:24.124 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:24.124 Nvme0n1 : 9.00 23703.67 92.59 0.00 0.00 0.00 0.00 0.00 00:29:24.124 [2024-12-12T09:42:58.147Z] =================================================================================================================== 00:29:24.124 [2024-12-12T09:42:58.147Z] Total : 23703.67 92.59 0.00 0.00 0.00 0.00 0.00 00:29:24.124 00:29:25.501 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:25.501 Nvme0n1 : 10.00 23733.60 92.71 0.00 0.00 0.00 0.00 0.00 00:29:25.501 [2024-12-12T09:42:59.524Z] =================================================================================================================== 00:29:25.501 [2024-12-12T09:42:59.524Z] Total : 23733.60 92.71 0.00 0.00 0.00 0.00 0.00 00:29:25.501 00:29:25.501 00:29:25.501 Latency(us) 00:29:25.501 [2024-12-12T09:42:59.524Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:25.501 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:25.501 Nvme0n1 : 10.01 23733.47 92.71 0.00 0.00 5390.06 3229.99 26588.89 00:29:25.501 [2024-12-12T09:42:59.524Z] =================================================================================================================== 00:29:25.501 [2024-12-12T09:42:59.524Z] Total : 23733.47 92.71 0.00 0.00 5390.06 3229.99 26588.89 00:29:25.501 { 00:29:25.501 "results": [ 00:29:25.501 { 00:29:25.501 "job": "Nvme0n1", 00:29:25.501 "core_mask": "0x2", 00:29:25.501 "workload": "randwrite", 00:29:25.501 "status": "finished", 00:29:25.501 "queue_depth": 128, 00:29:25.501 "io_size": 4096, 00:29:25.501 "runtime": 10.005448, 00:29:25.501 "iops": 23733.47000554098, 00:29:25.501 "mibps": 92.70886720914446, 00:29:25.501 "io_failed": 0, 00:29:25.501 "io_timeout": 0, 00:29:25.501 "avg_latency_us": 5390.057763253939, 00:29:25.501 "min_latency_us": 3229.9885714285715, 00:29:25.501 "max_latency_us": 26588.891428571427 00:29:25.501 } 00:29:25.501 ], 00:29:25.501 "core_count": 1 00:29:25.501 } 00:29:25.501 10:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1709035 00:29:25.501 10:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1709035 ']' 00:29:25.501 10:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1709035 
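The actual assertion of lvs_grow_clean sits in the middle of that I/O run: between the 1 s and 2 s samples the backing AIO file is doubled, the AIO bdev is rescanned, and bdev_lvol_grow_lvstore claims the new space, after which total_data_clusters must read 99 instead of 49 while bdevperf keeps writing. A standalone sketch of that grow-and-verify step (the backing-file path is a placeholder; the UUID is the one from this trace):

    RPC=./scripts/rpc.py
    AIO_FILE=/tmp/aio_bdev_file                    # placeholder backing file
    LVS_UUID=6a2f9a44-8fea-4a8f-a549-869b8fbc3309  # from the trace

    truncate -s 400M "$AIO_FILE"                   # grow backing file 200M -> 400M
    $RPC bdev_aio_rescan aio_bdev                  # bdev grows 51200 -> 102400 blocks
    $RPC bdev_lvol_grow_lvstore -u "$LVS_UUID"     # lvstore claims the new blocks

    clusters=$($RPC bdev_lvol_get_lvstores -u "$LVS_UUID" \
               | jq -r '.[0].total_data_clusters')
    (( clusters == 99 )) || { echo "grow failed: $clusters clusters" >&2; exit 1; }
    # 49 -> 99: with 4 MiB clusters, 200M and 400M backing files yield 50 and
    # 100 clusters respectively, minus what the lvstore reserves for metadata.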
00:29:25.501 10:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:29:25.501 10:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:25.501 10:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1709035 00:29:25.501 10:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:25.501 10:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:25.501 10:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1709035' 00:29:25.501 killing process with pid 1709035 00:29:25.501 10:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1709035 00:29:25.501 Received shutdown signal, test time was about 10.000000 seconds 00:29:25.501 00:29:25.501 Latency(us) 00:29:25.501 [2024-12-12T09:42:59.524Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:25.501 [2024-12-12T09:42:59.524Z] =================================================================================================================== 00:29:25.501 [2024-12-12T09:42:59.524Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:25.501 10:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1709035 00:29:25.501 10:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:25.760 10:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:25.760 10:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a2f9a44-8fea-4a8f-a549-869b8fbc3309 00:29:25.760 10:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:26.019 10:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:26.019 10:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:29:26.019 10:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:26.277 [2024-12-12 10:43:00.126606] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:26.277 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a2f9a44-8fea-4a8f-a549-869b8fbc3309 
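Here the suite flips to negative testing: with aio_bdev deleted out from under the lvstore, the lookup must fail, and the NOT wrapper entering the trace asserts exactly that the RPC exits non-zero. Without the helper, the same check is just an inverted exit-status test:

    # Assert the lvstore lookup fails once its base bdev is gone
    # (the suite wraps this in its NOT() helper; plain bash shown here).
    if ./scripts/rpc.py bdev_lvol_get_lvstores -u "$LVS_UUID" 2>/dev/null; then
        echo "lvstore still visible after aio_bdev removal" >&2
        exit 1
    fi
    # Expected JSON-RPC error, exactly as captured below:
    #   { "code": -19, "message": "No such device" }

Re-creating the AIO bdev afterwards lets the lvstore be re-examined from its on-disk metadata, which is what the bdev_wait_for_examine and waitforbdev steps that follow are checking.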
00:29:26.277 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:29:26.277 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a2f9a44-8fea-4a8f-a549-869b8fbc3309 00:29:26.277 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:26.277 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:26.277 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:26.277 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:26.277 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:26.277 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:26.277 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:26.277 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:26.277 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a2f9a44-8fea-4a8f-a549-869b8fbc3309 00:29:26.536 request: 00:29:26.536 { 00:29:26.536 "uuid": "6a2f9a44-8fea-4a8f-a549-869b8fbc3309", 00:29:26.536 "method": "bdev_lvol_get_lvstores", 00:29:26.536 "req_id": 1 00:29:26.536 } 00:29:26.536 Got JSON-RPC error response 00:29:26.536 response: 00:29:26.536 { 00:29:26.536 "code": -19, 00:29:26.536 "message": "No such device" 00:29:26.536 } 00:29:26.536 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:29:26.536 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:26.536 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:26.536 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:26.536 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:26.536 aio_bdev 00:29:26.795 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
e6dd6ab9-71d8-4eb6-9e08-0e564b497080 00:29:26.795 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=e6dd6ab9-71d8-4eb6-9e08-0e564b497080 00:29:26.795 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:26.795 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:29:26.795 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:26.795 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:26.795 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:26.795 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e6dd6ab9-71d8-4eb6-9e08-0e564b497080 -t 2000 00:29:27.053 [ 00:29:27.053 { 00:29:27.053 "name": "e6dd6ab9-71d8-4eb6-9e08-0e564b497080", 00:29:27.053 "aliases": [ 00:29:27.053 "lvs/lvol" 00:29:27.053 ], 00:29:27.053 "product_name": "Logical Volume", 00:29:27.053 "block_size": 4096, 00:29:27.053 "num_blocks": 38912, 00:29:27.053 "uuid": "e6dd6ab9-71d8-4eb6-9e08-0e564b497080", 00:29:27.053 "assigned_rate_limits": { 00:29:27.053 "rw_ios_per_sec": 0, 00:29:27.053 "rw_mbytes_per_sec": 0, 00:29:27.053 "r_mbytes_per_sec": 0, 00:29:27.053 "w_mbytes_per_sec": 0 00:29:27.053 }, 00:29:27.053 "claimed": false, 00:29:27.053 "zoned": false, 00:29:27.053 "supported_io_types": { 00:29:27.053 "read": true, 00:29:27.053 "write": true, 00:29:27.053 "unmap": true, 00:29:27.053 "flush": false, 00:29:27.053 "reset": true, 00:29:27.053 "nvme_admin": false, 00:29:27.053 "nvme_io": false, 00:29:27.053 "nvme_io_md": false, 00:29:27.053 "write_zeroes": true, 00:29:27.053 "zcopy": false, 00:29:27.053 "get_zone_info": false, 00:29:27.053 "zone_management": false, 00:29:27.053 "zone_append": false, 00:29:27.053 "compare": false, 00:29:27.053 "compare_and_write": false, 00:29:27.053 "abort": false, 00:29:27.053 "seek_hole": true, 00:29:27.053 "seek_data": true, 00:29:27.053 "copy": false, 00:29:27.053 "nvme_iov_md": false 00:29:27.053 }, 00:29:27.053 "driver_specific": { 00:29:27.053 "lvol": { 00:29:27.053 "lvol_store_uuid": "6a2f9a44-8fea-4a8f-a549-869b8fbc3309", 00:29:27.053 "base_bdev": "aio_bdev", 00:29:27.053 "thin_provision": false, 00:29:27.053 "num_allocated_clusters": 38, 00:29:27.053 "snapshot": false, 00:29:27.053 "clone": false, 00:29:27.053 "esnap_clone": false 00:29:27.053 } 00:29:27.054 } 00:29:27.054 } 00:29:27.054 ] 00:29:27.054 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:29:27.054 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a2f9a44-8fea-4a8f-a549-869b8fbc3309 00:29:27.054 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:27.312 10:43:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:27.312 10:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a2f9a44-8fea-4a8f-a549-869b8fbc3309 00:29:27.312 10:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:27.571 10:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:27.571 10:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e6dd6ab9-71d8-4eb6-9e08-0e564b497080 00:29:27.571 10:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6a2f9a44-8fea-4a8f-a549-869b8fbc3309 00:29:27.830 10:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:28.088 10:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:28.088 00:29:28.088 real 0m15.564s 00:29:28.088 user 0m15.075s 00:29:28.088 sys 0m1.496s 00:29:28.088 10:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:28.088 10:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:28.088 ************************************ 00:29:28.088 END TEST lvs_grow_clean 00:29:28.088 ************************************ 00:29:28.088 10:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:29:28.088 10:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:28.088 10:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:28.088 10:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:28.088 ************************************ 00:29:28.088 START TEST lvs_grow_dirty 00:29:28.088 ************************************ 00:29:28.088 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:29:28.088 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:28.088 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:28.089 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:28.089 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:28.089 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:28.089 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:28.089 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:28.089 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:28.089 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:28.347 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:28.347 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:28.606 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=b4cbbd9d-caf4-4820-b395-9ecf67456cc0 00:29:28.606 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b4cbbd9d-caf4-4820-b395-9ecf67456cc0 00:29:28.606 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:28.865 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:28.865 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:28.865 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b4cbbd9d-caf4-4820-b395-9ecf67456cc0 lvol 150 00:29:28.865 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=930047ea-6d36-4029-8a58-b0cb85678dd4 00:29:28.865 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:28.865 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:29.123 [2024-12-12 10:43:03.022516] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:29.123 [2024-12-12 10:43:03.022669] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:29.123 true 00:29:29.123 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:29.123 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b4cbbd9d-caf4-4820-b395-9ecf67456cc0 00:29:29.382 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:29.382 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:29.641 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 930047ea-6d36-4029-8a58-b0cb85678dd4 00:29:29.641 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:29.900 [2024-12-12 10:43:03.783062] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:29.900 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:30.159 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1711596 00:29:30.159 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:30.159 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:30.159 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1711596 /var/tmp/bdevperf.sock 00:29:30.159 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1711596 ']' 00:29:30.159 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:30.159 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:30.159 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:30.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
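waitforlisten does the same job for bdevperf here as it did for nvmf_tgt earlier: block until the just-forked process answers on its UNIX-domain RPC socket, and bail out if it dies first. A minimal stand-in for that loop, assuming rpc_get_methods as the liveness probe (the real helper in autotest_common.sh is more thorough):

    # Wait until an SPDK app accepts RPCs on $sock, or fail if $pid exits.
    wait_for_rpc_sock() {
        local pid=$1 sock=$2 i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1             # app died
            ./scripts/rpc.py -s "$sock" -t 1 rpc_get_methods \
                >/dev/null 2>&1 && return 0                    # socket is up
            sleep 0.1
        done
        return 1                                               # timed out
    }
    wait_for_rpc_sock "$bdevperf_pid" /var/tmp/bdevperf.sock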
00:29:30.159 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:30.159 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:30.159 [2024-12-12 10:43:04.029336] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:29:30.159 [2024-12-12 10:43:04.029388] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1711596 ] 00:29:30.159 [2024-12-12 10:43:04.102995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.159 [2024-12-12 10:43:04.143458] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:30.417 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:30.417 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:29:30.417 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:30.676 Nvme0n1 00:29:30.676 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:30.934 [ 00:29:30.934 { 00:29:30.934 "name": "Nvme0n1", 00:29:30.934 "aliases": [ 00:29:30.934 "930047ea-6d36-4029-8a58-b0cb85678dd4" 00:29:30.934 ], 00:29:30.934 "product_name": "NVMe disk", 00:29:30.934 "block_size": 4096, 00:29:30.934 "num_blocks": 38912, 00:29:30.934 "uuid": "930047ea-6d36-4029-8a58-b0cb85678dd4", 00:29:30.934 "numa_id": 1, 00:29:30.934 "assigned_rate_limits": { 00:29:30.934 "rw_ios_per_sec": 0, 00:29:30.934 "rw_mbytes_per_sec": 0, 00:29:30.934 "r_mbytes_per_sec": 0, 00:29:30.934 "w_mbytes_per_sec": 0 00:29:30.934 }, 00:29:30.934 "claimed": false, 00:29:30.934 "zoned": false, 00:29:30.934 "supported_io_types": { 00:29:30.934 "read": true, 00:29:30.934 "write": true, 00:29:30.934 "unmap": true, 00:29:30.934 "flush": true, 00:29:30.934 "reset": true, 00:29:30.934 "nvme_admin": true, 00:29:30.934 "nvme_io": true, 00:29:30.934 "nvme_io_md": false, 00:29:30.934 "write_zeroes": true, 00:29:30.934 "zcopy": false, 00:29:30.934 "get_zone_info": false, 00:29:30.934 "zone_management": false, 00:29:30.934 "zone_append": false, 00:29:30.934 "compare": true, 00:29:30.934 "compare_and_write": true, 00:29:30.934 "abort": true, 00:29:30.934 "seek_hole": false, 00:29:30.934 "seek_data": false, 00:29:30.934 "copy": true, 00:29:30.934 "nvme_iov_md": false 00:29:30.934 }, 00:29:30.934 "memory_domains": [ 00:29:30.934 { 00:29:30.934 "dma_device_id": "system", 00:29:30.934 "dma_device_type": 1 00:29:30.934 } 00:29:30.934 ], 00:29:30.934 "driver_specific": { 00:29:30.934 "nvme": [ 00:29:30.934 { 00:29:30.934 "trid": { 00:29:30.934 "trtype": "TCP", 00:29:30.934 "adrfam": "IPv4", 00:29:30.934 "traddr": "10.0.0.2", 00:29:30.934 "trsvcid": "4420", 00:29:30.934 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:30.934 }, 00:29:30.934 "ctrlr_data": 
{ 00:29:30.934 "cntlid": 1, 00:29:30.934 "vendor_id": "0x8086", 00:29:30.934 "model_number": "SPDK bdev Controller", 00:29:30.934 "serial_number": "SPDK0", 00:29:30.934 "firmware_revision": "25.01", 00:29:30.934 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:30.934 "oacs": { 00:29:30.934 "security": 0, 00:29:30.934 "format": 0, 00:29:30.934 "firmware": 0, 00:29:30.934 "ns_manage": 0 00:29:30.934 }, 00:29:30.934 "multi_ctrlr": true, 00:29:30.934 "ana_reporting": false 00:29:30.934 }, 00:29:30.934 "vs": { 00:29:30.934 "nvme_version": "1.3" 00:29:30.934 }, 00:29:30.934 "ns_data": { 00:29:30.934 "id": 1, 00:29:30.934 "can_share": true 00:29:30.934 } 00:29:30.934 } 00:29:30.934 ], 00:29:30.934 "mp_policy": "active_passive" 00:29:30.934 } 00:29:30.934 } 00:29:30.934 ] 00:29:30.934 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1711609 00:29:30.934 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:30.934 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:30.934 Running I/O for 10 seconds... 00:29:32.309 Latency(us) 00:29:32.309 [2024-12-12T09:43:06.332Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:32.309 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:32.309 Nvme0n1 : 1.00 23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:29:32.309 [2024-12-12T09:43:06.332Z] =================================================================================================================== 00:29:32.309 [2024-12-12T09:43:06.332Z] Total : 23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:29:32.309 00:29:32.877 10:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b4cbbd9d-caf4-4820-b395-9ecf67456cc0 00:29:33.135 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:33.135 Nvme0n1 : 2.00 23376.50 91.31 0.00 0.00 0.00 0.00 0.00 00:29:33.135 [2024-12-12T09:43:07.158Z] =================================================================================================================== 00:29:33.135 [2024-12-12T09:43:07.158Z] Total : 23376.50 91.31 0.00 0.00 0.00 0.00 0.00 00:29:33.135 00:29:33.135 true 00:29:33.135 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b4cbbd9d-caf4-4820-b395-9ecf67456cc0 00:29:33.135 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:33.394 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:33.394 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:33.394 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1711609 00:29:33.962 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:33.962 Nvme0n1 : 
3.00 23500.67 91.80 0.00 0.00 0.00 0.00 0.00 00:29:33.962 [2024-12-12T09:43:07.985Z] =================================================================================================================== 00:29:33.962 [2024-12-12T09:43:07.985Z] Total : 23500.67 91.80 0.00 0.00 0.00 0.00 0.00 00:29:33.962 00:29:35.339 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:35.339 Nvme0n1 : 4.00 23594.50 92.17 0.00 0.00 0.00 0.00 0.00 00:29:35.339 [2024-12-12T09:43:09.362Z] =================================================================================================================== 00:29:35.339 [2024-12-12T09:43:09.362Z] Total : 23594.50 92.17 0.00 0.00 0.00 0.00 0.00 00:29:35.339 00:29:35.906 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:35.906 Nvme0n1 : 5.00 23650.80 92.39 0.00 0.00 0.00 0.00 0.00 00:29:35.906 [2024-12-12T09:43:09.929Z] =================================================================================================================== 00:29:35.906 [2024-12-12T09:43:09.929Z] Total : 23650.80 92.39 0.00 0.00 0.00 0.00 0.00 00:29:35.906 00:29:37.282 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:37.282 Nvme0n1 : 6.00 23667.17 92.45 0.00 0.00 0.00 0.00 0.00 00:29:37.282 [2024-12-12T09:43:11.305Z] =================================================================================================================== 00:29:37.282 [2024-12-12T09:43:11.305Z] Total : 23667.17 92.45 0.00 0.00 0.00 0.00 0.00 00:29:37.282 00:29:38.218 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:38.218 Nvme0n1 : 7.00 23697.00 92.57 0.00 0.00 0.00 0.00 0.00 00:29:38.218 [2024-12-12T09:43:12.241Z] =================================================================================================================== 00:29:38.218 [2024-12-12T09:43:12.241Z] Total : 23697.00 92.57 0.00 0.00 0.00 0.00 0.00 00:29:38.218 00:29:39.154 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:39.154 Nvme0n1 : 8.00 23735.25 92.72 0.00 0.00 0.00 0.00 0.00 00:29:39.154 [2024-12-12T09:43:13.177Z] =================================================================================================================== 00:29:39.154 [2024-12-12T09:43:13.177Z] Total : 23735.25 92.72 0.00 0.00 0.00 0.00 0.00 00:29:39.154 00:29:40.090 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:40.090 Nvme0n1 : 9.00 23765.00 92.83 0.00 0.00 0.00 0.00 0.00 00:29:40.090 [2024-12-12T09:43:14.113Z] =================================================================================================================== 00:29:40.090 [2024-12-12T09:43:14.113Z] Total : 23765.00 92.83 0.00 0.00 0.00 0.00 0.00 00:29:40.090 00:29:41.026 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:41.026 Nvme0n1 : 10.00 23788.80 92.92 0.00 0.00 0.00 0.00 0.00 00:29:41.026 [2024-12-12T09:43:15.049Z] =================================================================================================================== 00:29:41.026 [2024-12-12T09:43:15.049Z] Total : 23788.80 92.92 0.00 0.00 0.00 0.00 0.00 00:29:41.026 00:29:41.026 00:29:41.027 Latency(us) 00:29:41.027 [2024-12-12T09:43:15.050Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:41.027 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:41.027 Nvme0n1 : 10.00 23794.59 92.95 0.00 0.00 5376.67 3229.99 26214.40 00:29:41.027 
[2024-12-12T09:43:15.050Z] =================================================================================================================== 00:29:41.027 [2024-12-12T09:43:15.050Z] Total : 23794.59 92.95 0.00 0.00 5376.67 3229.99 26214.40 00:29:41.027 { 00:29:41.027 "results": [ 00:29:41.027 { 00:29:41.027 "job": "Nvme0n1", 00:29:41.027 "core_mask": "0x2", 00:29:41.027 "workload": "randwrite", 00:29:41.027 "status": "finished", 00:29:41.027 "queue_depth": 128, 00:29:41.027 "io_size": 4096, 00:29:41.027 "runtime": 10.002948, 00:29:41.027 "iops": 23794.58535623698, 00:29:41.027 "mibps": 92.94759904780071, 00:29:41.027 "io_failed": 0, 00:29:41.027 "io_timeout": 0, 00:29:41.027 "avg_latency_us": 5376.674458827898, 00:29:41.027 "min_latency_us": 3229.9885714285715, 00:29:41.027 "max_latency_us": 26214.4 00:29:41.027 } 00:29:41.027 ], 00:29:41.027 "core_count": 1 00:29:41.027 } 00:29:41.027 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1711596 00:29:41.027 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1711596 ']' 00:29:41.027 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1711596 00:29:41.027 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:29:41.027 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:41.027 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1711596 00:29:41.027 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:41.027 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:41.027 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1711596' 00:29:41.027 killing process with pid 1711596 00:29:41.027 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1711596 00:29:41.027 Received shutdown signal, test time was about 10.000000 seconds 00:29:41.027 00:29:41.027 Latency(us) 00:29:41.027 [2024-12-12T09:43:15.050Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:41.027 [2024-12-12T09:43:15.050Z] =================================================================================================================== 00:29:41.027 [2024-12-12T09:43:15.050Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:41.027 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1711596 00:29:41.285 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:41.543 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:29:41.543 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b4cbbd9d-caf4-4820-b395-9ecf67456cc0 00:29:41.543 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:41.802 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:41.802 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:29:41.802 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1708602 00:29:41.802 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1708602 00:29:41.802 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1708602 Killed "${NVMF_APP[@]}" "$@" 00:29:41.802 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:29:41.802 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:29:41.802 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:41.802 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:41.802 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:41.802 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1713411 00:29:41.802 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1713411 00:29:41.802 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:41.802 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1713411 ']' 00:29:41.802 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:41.802 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:41.802 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:41.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
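Editor's note: the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above is the harness polling the target's JSON-RPC socket until the freshly started nvmf_tgt answers. A minimal sketch of that pattern using only the Python standard library; this is an illustration, not SPDK's actual waitforlisten, and the helper name and timeouts are invented (rpc_get_methods itself is a real SPDK RPC).

```python
import json
import socket
import time

def wait_for_rpc_socket(path="/var/tmp/spdk.sock", timeout=30.0):
    """Poll a UNIX-domain JSON-RPC socket until the target answers."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                s.settimeout(1.0)
                s.connect(path)
                # rpc_get_methods is a standard SPDK RPC; any reply at all
                # means the target is up and listening on the socket.
                req = {"jsonrpc": "2.0", "id": 1, "method": "rpc_get_methods"}
                s.sendall(json.dumps(req).encode())
                if s.recv(4096):
                    return True
        except OSError:
            pass  # socket not created yet, or target still starting up
        time.sleep(0.5)
    raise TimeoutError(f"no RPC listener on {path} after {timeout}s")
```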
00:29:41.802 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:41.802 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:42.060 [2024-12-12 10:43:15.853505] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:42.060 [2024-12-12 10:43:15.854409] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:29:42.060 [2024-12-12 10:43:15.854450] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:42.061 [2024-12-12 10:43:15.935110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:42.061 [2024-12-12 10:43:15.974962] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:42.061 [2024-12-12 10:43:15.974997] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:42.061 [2024-12-12 10:43:15.975004] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:42.061 [2024-12-12 10:43:15.975010] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:42.061 [2024-12-12 10:43:15.975015] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:42.061 [2024-12-12 10:43:15.975510] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:42.061 [2024-12-12 10:43:16.043185] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:42.061 [2024-12-12 10:43:16.043380] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
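Editor's note: before moving on from the bdevperf run above, the reported figures are internally consistent, which is a quick way to sanity-check a run. The values below are copied verbatim from the logged results JSON ("iops", "io_size", "runtime", "mibps"); the check itself is just arithmetic.

```python
# Consistency check on the bdevperf summary above: with 4 KiB I/Os,
# IOPS and MiB/s must agree, and IOPS * runtime gives the I/O count.
iops = 23794.58535623698      # "iops" from the results JSON
io_size = 4096                # bytes per I/O ("io_size": 4096)
runtime = 10.002948           # seconds ("runtime": 10.002948)

mibps = iops * io_size / (1024 * 1024)
print(f"{mibps:.2f} MiB/s")           # 92.95, matching "mibps" in the log
print(f"{iops * runtime:.0f} I/Os")   # ~238016 completions over the run
```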
00:29:42.061 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:42.061 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:29:42.061 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:42.061 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:42.061 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:42.319 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:42.319 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:42.319 [2024-12-12 10:43:16.272860] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:29:42.319 [2024-12-12 10:43:16.273065] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:29:42.319 [2024-12-12 10:43:16.273150] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:29:42.319 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:29:42.319 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 930047ea-6d36-4029-8a58-b0cb85678dd4 00:29:42.319 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=930047ea-6d36-4029-8a58-b0cb85678dd4 00:29:42.319 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:42.319 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:29:42.319 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:42.319 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:42.319 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:42.578 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 930047ea-6d36-4029-8a58-b0cb85678dd4 -t 2000 00:29:42.836 [ 00:29:42.836 { 00:29:42.836 "name": "930047ea-6d36-4029-8a58-b0cb85678dd4", 00:29:42.836 "aliases": [ 00:29:42.836 "lvs/lvol" 00:29:42.836 ], 00:29:42.836 "product_name": "Logical Volume", 00:29:42.836 "block_size": 4096, 00:29:42.836 "num_blocks": 38912, 00:29:42.836 "uuid": "930047ea-6d36-4029-8a58-b0cb85678dd4", 00:29:42.836 "assigned_rate_limits": { 00:29:42.836 "rw_ios_per_sec": 0, 00:29:42.836 "rw_mbytes_per_sec": 0, 00:29:42.836 
"r_mbytes_per_sec": 0, 00:29:42.836 "w_mbytes_per_sec": 0 00:29:42.836 }, 00:29:42.836 "claimed": false, 00:29:42.836 "zoned": false, 00:29:42.836 "supported_io_types": { 00:29:42.836 "read": true, 00:29:42.836 "write": true, 00:29:42.836 "unmap": true, 00:29:42.836 "flush": false, 00:29:42.836 "reset": true, 00:29:42.836 "nvme_admin": false, 00:29:42.836 "nvme_io": false, 00:29:42.836 "nvme_io_md": false, 00:29:42.836 "write_zeroes": true, 00:29:42.836 "zcopy": false, 00:29:42.836 "get_zone_info": false, 00:29:42.836 "zone_management": false, 00:29:42.836 "zone_append": false, 00:29:42.836 "compare": false, 00:29:42.836 "compare_and_write": false, 00:29:42.836 "abort": false, 00:29:42.836 "seek_hole": true, 00:29:42.836 "seek_data": true, 00:29:42.836 "copy": false, 00:29:42.836 "nvme_iov_md": false 00:29:42.836 }, 00:29:42.836 "driver_specific": { 00:29:42.836 "lvol": { 00:29:42.836 "lvol_store_uuid": "b4cbbd9d-caf4-4820-b395-9ecf67456cc0", 00:29:42.836 "base_bdev": "aio_bdev", 00:29:42.836 "thin_provision": false, 00:29:42.836 "num_allocated_clusters": 38, 00:29:42.836 "snapshot": false, 00:29:42.836 "clone": false, 00:29:42.836 "esnap_clone": false 00:29:42.836 } 00:29:42.836 } 00:29:42.836 } 00:29:42.836 ] 00:29:42.836 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:29:42.836 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b4cbbd9d-caf4-4820-b395-9ecf67456cc0 00:29:42.836 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:29:43.095 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:29:43.095 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b4cbbd9d-caf4-4820-b395-9ecf67456cc0 00:29:43.095 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:29:43.095 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:29:43.095 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:43.354 [2024-12-12 10:43:17.271968] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:43.354 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b4cbbd9d-caf4-4820-b395-9ecf67456cc0 00:29:43.354 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:29:43.354 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b4cbbd9d-caf4-4820-b395-9ecf67456cc0 00:29:43.354 10:43:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:43.354 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:43.354 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:43.354 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:43.354 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:43.354 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:43.354 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:43.354 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:43.354 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b4cbbd9d-caf4-4820-b395-9ecf67456cc0 00:29:43.613 request: 00:29:43.613 { 00:29:43.613 "uuid": "b4cbbd9d-caf4-4820-b395-9ecf67456cc0", 00:29:43.613 "method": "bdev_lvol_get_lvstores", 00:29:43.613 "req_id": 1 00:29:43.613 } 00:29:43.613 Got JSON-RPC error response 00:29:43.613 response: 00:29:43.613 { 00:29:43.613 "code": -19, 00:29:43.613 "message": "No such device" 00:29:43.613 } 00:29:43.613 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:29:43.613 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:43.613 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:43.613 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:43.613 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:43.872 aio_bdev 00:29:43.872 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 930047ea-6d36-4029-8a58-b0cb85678dd4 00:29:43.872 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=930047ea-6d36-4029-8a58-b0cb85678dd4 00:29:43.872 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:43.872 10:43:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:29:43.872 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:43.872 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:43.872 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:44.130 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 930047ea-6d36-4029-8a58-b0cb85678dd4 -t 2000 00:29:44.130 [ 00:29:44.130 { 00:29:44.130 "name": "930047ea-6d36-4029-8a58-b0cb85678dd4", 00:29:44.130 "aliases": [ 00:29:44.130 "lvs/lvol" 00:29:44.130 ], 00:29:44.130 "product_name": "Logical Volume", 00:29:44.130 "block_size": 4096, 00:29:44.130 "num_blocks": 38912, 00:29:44.130 "uuid": "930047ea-6d36-4029-8a58-b0cb85678dd4", 00:29:44.130 "assigned_rate_limits": { 00:29:44.130 "rw_ios_per_sec": 0, 00:29:44.130 "rw_mbytes_per_sec": 0, 00:29:44.130 "r_mbytes_per_sec": 0, 00:29:44.130 "w_mbytes_per_sec": 0 00:29:44.130 }, 00:29:44.130 "claimed": false, 00:29:44.130 "zoned": false, 00:29:44.130 "supported_io_types": { 00:29:44.130 "read": true, 00:29:44.130 "write": true, 00:29:44.130 "unmap": true, 00:29:44.130 "flush": false, 00:29:44.130 "reset": true, 00:29:44.130 "nvme_admin": false, 00:29:44.130 "nvme_io": false, 00:29:44.130 "nvme_io_md": false, 00:29:44.130 "write_zeroes": true, 00:29:44.130 "zcopy": false, 00:29:44.130 "get_zone_info": false, 00:29:44.130 "zone_management": false, 00:29:44.130 "zone_append": false, 00:29:44.130 "compare": false, 00:29:44.130 "compare_and_write": false, 00:29:44.130 "abort": false, 00:29:44.130 "seek_hole": true, 00:29:44.130 "seek_data": true, 00:29:44.130 "copy": false, 00:29:44.130 "nvme_iov_md": false 00:29:44.130 }, 00:29:44.130 "driver_specific": { 00:29:44.130 "lvol": { 00:29:44.130 "lvol_store_uuid": "b4cbbd9d-caf4-4820-b395-9ecf67456cc0", 00:29:44.130 "base_bdev": "aio_bdev", 00:29:44.130 "thin_provision": false, 00:29:44.130 "num_allocated_clusters": 38, 00:29:44.130 "snapshot": false, 00:29:44.130 "clone": false, 00:29:44.130 "esnap_clone": false 00:29:44.130 } 00:29:44.130 } 00:29:44.130 } 00:29:44.130 ] 00:29:44.130 10:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:29:44.130 10:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b4cbbd9d-caf4-4820-b395-9ecf67456cc0 00:29:44.130 10:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:44.389 10:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:44.389 10:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b4cbbd9d-caf4-4820-b395-9ecf67456cc0 00:29:44.389 10:43:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:44.648 10:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:44.648 10:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 930047ea-6d36-4029-8a58-b0cb85678dd4 00:29:44.648 10:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b4cbbd9d-caf4-4820-b395-9ecf67456cc0 00:29:44.907 10:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:45.166 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:45.166 00:29:45.166 real 0m17.075s 00:29:45.166 user 0m34.577s 00:29:45.166 sys 0m3.704s 00:29:45.166 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:45.166 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:45.166 ************************************ 00:29:45.166 END TEST lvs_grow_dirty 00:29:45.166 ************************************ 00:29:45.166 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:29:45.166 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:29:45.166 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:29:45.166 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:29:45.166 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:29:45.166 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:29:45.166 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:29:45.166 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:29:45.166 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:29:45.166 nvmf_trace.0 00:29:45.166 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:29:45.166 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:29:45.166 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:45.166 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
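Editor's note: the lvs_grow_dirty assertions above are plain cluster accounting over the RPC output: after dirty recovery the grown lvstore exposes 99 data clusters, the lvol holds 38 allocated clusters, leaving 61 free. A small sketch mirroring the script's jq extractions and (( ... )) checks; the dicts stand in for the parsed bdev_lvol_get_lvstores / bdev_get_bdevs JSON, with values taken from the log.

```python
# Cluster accounting behind the lvs_grow_dirty checks above. A real
# harness would parse the RPC JSON; these dicts stand in for it.
lvstore = {"total_data_clusters": 99, "free_clusters": 61}
lvol = {"num_allocated_clusters": 38}

# The same comparisons the script performs with jq and (( ... )):
assert lvstore["total_data_clusters"] == 99
assert lvstore["free_clusters"] == 61
# ...which is consistent: allocated plus free covers the whole store.
assert (lvol["num_allocated_clusters"] + lvstore["free_clusters"]
        == lvstore["total_data_clusters"])
```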
00:29:45.166 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:45.166 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:29:45.166 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:45.166 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:45.425 rmmod nvme_tcp 00:29:45.425 rmmod nvme_fabrics 00:29:45.425 rmmod nvme_keyring 00:29:45.425 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:45.425 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:29:45.425 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:29:45.425 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1713411 ']' 00:29:45.425 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1713411 00:29:45.425 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1713411 ']' 00:29:45.425 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1713411 00:29:45.425 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:29:45.425 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:45.425 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1713411 00:29:45.425 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:45.425 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:45.425 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1713411' 00:29:45.425 killing process with pid 1713411 00:29:45.425 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1713411 00:29:45.425 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1713411 00:29:45.684 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:45.685 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:45.685 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:45.685 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:29:45.685 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:29:45.685 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:45.685 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:29:45.685 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:45.685 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:45.685 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:45.685 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:45.685 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:47.591 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:47.591 00:29:47.591 real 0m41.842s 00:29:47.591 user 0m52.085s 00:29:47.591 sys 0m10.117s 00:29:47.591 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:47.591 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:47.591 ************************************ 00:29:47.591 END TEST nvmf_lvs_grow 00:29:47.591 ************************************ 00:29:47.591 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:47.591 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:47.591 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:47.591 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:47.591 ************************************ 00:29:47.591 START TEST nvmf_bdev_io_wait 00:29:47.591 ************************************ 00:29:47.591 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:47.852 * Looking for test storage... 
00:29:47.852 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:47.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.852 --rc genhtml_branch_coverage=1 00:29:47.852 --rc genhtml_function_coverage=1 00:29:47.852 --rc genhtml_legend=1 00:29:47.852 --rc geninfo_all_blocks=1 00:29:47.852 --rc geninfo_unexecuted_blocks=1 00:29:47.852 00:29:47.852 ' 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:47.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.852 --rc genhtml_branch_coverage=1 00:29:47.852 --rc genhtml_function_coverage=1 00:29:47.852 --rc genhtml_legend=1 00:29:47.852 --rc geninfo_all_blocks=1 00:29:47.852 --rc geninfo_unexecuted_blocks=1 00:29:47.852 00:29:47.852 ' 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:47.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.852 --rc genhtml_branch_coverage=1 00:29:47.852 --rc genhtml_function_coverage=1 00:29:47.852 --rc genhtml_legend=1 00:29:47.852 --rc geninfo_all_blocks=1 00:29:47.852 --rc geninfo_unexecuted_blocks=1 00:29:47.852 00:29:47.852 ' 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:47.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.852 --rc genhtml_branch_coverage=1 00:29:47.852 --rc genhtml_function_coverage=1 00:29:47.852 --rc genhtml_legend=1 00:29:47.852 --rc geninfo_all_blocks=1 00:29:47.852 --rc 
geninfo_unexecuted_blocks=1 00:29:47.852 00:29:47.852 ' 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:47.852 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:47.853 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:47.853 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:47.853 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:29:47.853 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:47.853 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:47.853 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:47.853 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.853 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.853 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.853 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:29:47.853 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.853 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:29:47.853 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:47.853 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:47.853 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:47.853 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:47.853 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:29:47.853 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:47.853 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:47.853 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:47.853 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:47.853 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:47.853 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:47.853 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:47.853 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:29:47.853 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:47.853 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:47.853 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:47.853 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:47.853 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:47.853 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:47.853 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:47.853 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:47.853 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:47.853 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:47.853 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:29:47.853 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
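Editor's note: the array construction just traced builds a vendor:device-to-family table (e810: 0x1592/0x159b; x722: 0x37d2; several Mellanox ConnectX IDs) before the bus scan that follows. A compact Python rendering of that lookup; the IDs are copied from the traced table (a representative subset of the Mellanox entries), and classify_nic() is an invented name for this sketch.

```python
# NIC classification as in gather_supported_nvmf_pci_devs above: map a
# (vendor, device) PCI ID pair to a NIC family. IDs from the traced table.
INTEL, MELLANOX = 0x8086, 0x15B3

NIC_FAMILIES = {
    (INTEL, 0x1592): "e810",
    (INTEL, 0x159B): "e810",
    (INTEL, 0x37D2): "x722",
    (MELLANOX, 0x1017): "mlx",  # a few of the ConnectX IDs in the table
    (MELLANOX, 0x1019): "mlx",
    (MELLANOX, 0x101D): "mlx",
}

def classify_nic(vendor: int, device: int) -> str:
    return NIC_FAMILIES.get((vendor, device), "unknown")

# The scan that follows matches two ports at 0000:af:00.0/.1
# reported as (0x8086 - 0x159b):
print(classify_nic(0x8086, 0x159B))  # -> "e810"
```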
00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:54.422 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:54.422 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:54.422 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:54.423 Found net devices under 0000:af:00.0: cvl_0_0 00:29:54.423 
10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:54.423 Found net devices under 0000:af:00.1: cvl_0_1 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:54.423 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:54.423 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.381 ms 00:29:54.423 00:29:54.423 --- 10.0.0.2 ping statistics --- 00:29:54.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:54.423 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:54.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:54.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:29:54.423 00:29:54.423 --- 10.0.0.1 ping statistics --- 00:29:54.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:54.423 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1717387 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1717387 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1717387 ']' 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:54.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
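Stripped of the xtrace framing, nvmf_tcp_init above builds a loopback topology out of the two E810 ports: one port is moved into a private network namespace to play the target, the other stays in the root namespace as the initiator, and a firewall exception is punched for the NVMe/TCP port. A minimal sketch, with names and addresses copied from this run (they will differ on other hosts):

ip netns add cvl_0_0_ns_spdk                          # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
ping -c 1 10.0.0.2                                    # root ns -> target, as above
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> initiator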
00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:54.423 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:54.423 [2024-12-12 10:43:27.724240] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:54.423 [2024-12-12 10:43:27.725254] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:29:54.423 [2024-12-12 10:43:27.725293] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:54.423 [2024-12-12 10:43:27.806979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:54.423 [2024-12-12 10:43:27.854531] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:54.423 [2024-12-12 10:43:27.854567] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:54.423 [2024-12-12 10:43:27.854579] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:54.423 [2024-12-12 10:43:27.854588] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:54.423 [2024-12-12 10:43:27.854593] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:54.423 [2024-12-12 10:43:27.855909] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:54.423 [2024-12-12 10:43:27.855936] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:54.423 [2024-12-12 10:43:27.855960] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:54.423 [2024-12-12 10:43:27.855961] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:54.423 [2024-12-12 10:43:27.856455] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
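nvmfappstart then launches the target inside that namespace, paused and in interrupt mode, which is what produces the reactor and intr-mode notices above: --wait-for-rpc holds initialization until an RPC releases it, and --interrupt-mode makes the reactors event-driven instead of polling. Roughly (waitforlisten is the harness helper that polls the RPC socket until the app answers):

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
nvmfpid=$!
waitforlisten "$nvmfpid"    # blocks until /var/tmp/spdk.sock accepts RPCs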
00:29:54.683 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:54.683 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:29:54.683 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:54.683 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:54.683 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:54.683 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:54.683 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:29:54.683 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.683 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:54.683 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.683 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:29:54.683 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.683 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:54.683 [2024-12-12 10:43:28.665162] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:54.683 [2024-12-12 10:43:28.665931] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:54.683 [2024-12-12 10:43:28.666039] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:29:54.683 [2024-12-12 10:43:28.666178] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
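With the target still paused, bdev_io_wait deliberately shrinks the bdev I/O pool before releasing it: a pool of 5 with a per-thread cache of 1 is small enough for bdevperf to exhaust, which is what drives the io-wait path under test, and the poll-group intr-mode notices above are emitted as framework_start_init brings the poll groups online. As a sketch (rpc_cmd resolves to scripts/rpc.py talking to the target's RPC socket):

scripts/rpc.py bdev_set_options -p 5 -c 1   # tiny bdev_io pool (-p) and cache (-c)
scripts/rpc.py framework_start_init         # release the --wait-for-rpc pause
# the TCP transport, Malloc0 bdev, subsystem and listener are created next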
00:29:54.683 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.683 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:54.683 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.683 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:54.683 [2024-12-12 10:43:28.676714] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:54.683 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.683 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:54.683 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.683 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:54.943 Malloc0 00:29:54.943 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.943 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:54.943 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.943 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:54.943 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.943 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:54.943 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.943 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:54.943 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.943 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:54.943 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.943 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:54.943 [2024-12-12 10:43:28.749089] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:54.943 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.943 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1717629 00:29:54.943 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:29:54.943 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:29:54.943 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1717631 00:29:54.943 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:54.943 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:54.943 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:54.943 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:54.943 { 00:29:54.943 "params": { 00:29:54.943 "name": "Nvme$subsystem", 00:29:54.943 "trtype": "$TEST_TRANSPORT", 00:29:54.943 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:54.943 "adrfam": "ipv4", 00:29:54.943 "trsvcid": "$NVMF_PORT", 00:29:54.943 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:54.943 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:54.943 "hdgst": ${hdgst:-false}, 00:29:54.943 "ddgst": ${ddgst:-false} 00:29:54.943 }, 00:29:54.943 "method": "bdev_nvme_attach_controller" 00:29:54.943 } 00:29:54.943 EOF 00:29:54.943 )") 00:29:54.943 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:29:54.943 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1717633 00:29:54.943 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:29:54.943 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:54.943 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:54.943 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:54.943 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:29:54.943 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:54.943 { 00:29:54.943 "params": { 00:29:54.943 "name": "Nvme$subsystem", 00:29:54.943 "trtype": "$TEST_TRANSPORT", 00:29:54.943 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:54.943 "adrfam": "ipv4", 00:29:54.943 "trsvcid": "$NVMF_PORT", 00:29:54.943 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:54.943 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:54.943 "hdgst": ${hdgst:-false}, 00:29:54.943 "ddgst": ${ddgst:-false} 00:29:54.943 }, 00:29:54.943 "method": "bdev_nvme_attach_controller" 00:29:54.943 } 00:29:54.943 EOF 00:29:54.943 )") 00:29:54.943 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1717636 00:29:54.943 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:29:54.943 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:29:54.943 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:54.943 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:54.943 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:54.943 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:54.944 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:29:54.944 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:29:54.944 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:54.944 { 00:29:54.944 "params": { 00:29:54.944 "name": "Nvme$subsystem", 00:29:54.944 "trtype": "$TEST_TRANSPORT", 00:29:54.944 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:54.944 "adrfam": "ipv4", 00:29:54.944 "trsvcid": "$NVMF_PORT", 00:29:54.944 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:54.944 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:54.944 "hdgst": ${hdgst:-false}, 00:29:54.944 "ddgst": ${ddgst:-false} 00:29:54.944 }, 00:29:54.944 "method": "bdev_nvme_attach_controller" 00:29:54.944 } 00:29:54.944 EOF 00:29:54.944 )") 00:29:54.944 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:54.944 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:54.944 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:54.944 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:54.944 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:54.944 { 00:29:54.944 "params": { 00:29:54.944 "name": "Nvme$subsystem", 00:29:54.944 "trtype": "$TEST_TRANSPORT", 00:29:54.944 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:54.944 "adrfam": "ipv4", 00:29:54.944 "trsvcid": "$NVMF_PORT", 00:29:54.944 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:54.944 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:54.944 "hdgst": ${hdgst:-false}, 00:29:54.944 "ddgst": ${ddgst:-false} 00:29:54.944 }, 00:29:54.944 "method": "bdev_nvme_attach_controller" 00:29:54.944 } 00:29:54.944 EOF 00:29:54.944 )") 00:29:54.944 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:54.944 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1717629 00:29:54.944 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:54.944 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:54.944 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
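Each bdevperf worker gets its bdev configuration through process substitution (the --json /dev/fd/63 in the launch lines), and gen_nvmf_target_json is what fills that fd: a heredoc per subsystem builds one bdev_nvme_attach_controller stanza, the stanzas are comma-joined via IFS, and jq pretty-prints the result. A sketch of the function as it replays above; the enclosing subsystems/bdev envelope is not itself visible in the trace and is inferred from the JSON-config shape bdevperf consumes:

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,                        # comma-join if more than one stanza
    jq . <<JSON
{ "subsystems": [ { "subsystem": "bdev",
    "config": [ $(printf '%s\n' "${config[*]}") ] } ] }
JSON
}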
00:29:54.944 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:54.944 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:54.944 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:54.944 "params": { 00:29:54.944 "name": "Nvme1", 00:29:54.944 "trtype": "tcp", 00:29:54.944 "traddr": "10.0.0.2", 00:29:54.944 "adrfam": "ipv4", 00:29:54.944 "trsvcid": "4420", 00:29:54.944 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:54.944 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:54.944 "hdgst": false, 00:29:54.944 "ddgst": false 00:29:54.944 }, 00:29:54.944 "method": "bdev_nvme_attach_controller" 00:29:54.944 }' 00:29:54.944 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:54.944 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:54.944 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:54.944 "params": { 00:29:54.944 "name": "Nvme1", 00:29:54.944 "trtype": "tcp", 00:29:54.944 "traddr": "10.0.0.2", 00:29:54.944 "adrfam": "ipv4", 00:29:54.944 "trsvcid": "4420", 00:29:54.944 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:54.944 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:54.944 "hdgst": false, 00:29:54.944 "ddgst": false 00:29:54.944 }, 00:29:54.944 "method": "bdev_nvme_attach_controller" 00:29:54.944 }' 00:29:54.944 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:54.944 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:54.944 "params": { 00:29:54.944 "name": "Nvme1", 00:29:54.944 "trtype": "tcp", 00:29:54.944 "traddr": "10.0.0.2", 00:29:54.944 "adrfam": "ipv4", 00:29:54.944 "trsvcid": "4420", 00:29:54.944 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:54.944 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:54.944 "hdgst": false, 00:29:54.944 "ddgst": false 00:29:54.944 }, 00:29:54.944 "method": "bdev_nvme_attach_controller" 00:29:54.944 }' 00:29:54.944 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:54.944 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:54.944 "params": { 00:29:54.944 "name": "Nvme1", 00:29:54.944 "trtype": "tcp", 00:29:54.944 "traddr": "10.0.0.2", 00:29:54.944 "adrfam": "ipv4", 00:29:54.944 "trsvcid": "4420", 00:29:54.944 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:54.944 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:54.944 "hdgst": false, 00:29:54.944 "ddgst": false 00:29:54.944 }, 00:29:54.944 "method": "bdev_nvme_attach_controller" 00:29:54.944 }' 00:29:54.944 [2024-12-12 10:43:28.798896] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:29:54.944 [2024-12-12 10:43:28.798949] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:29:54.944 [2024-12-12 10:43:28.803059] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:29:54.944 [2024-12-12 10:43:28.803101] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:29:54.944 [2024-12-12 10:43:28.804267] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:29:54.944 [2024-12-12 10:43:28.804307] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:29:54.944 [2024-12-12 10:43:28.804996] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:29:54.944 [2024-12-12 10:43:28.805038] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:29:55.203 [2024-12-12 10:43:28.980086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:55.203 [2024-12-12 10:43:29.025074] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:29:55.203 [2024-12-12 10:43:29.083759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:55.203 [2024-12-12 10:43:29.142594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:55.203 [2024-12-12 10:43:29.157568] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:29:55.203 [2024-12-12 10:43:29.185629] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:29:55.203 [2024-12-12 10:43:29.204268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:55.462 [2024-12-12 10:43:29.244928] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:29:55.462 Running I/O for 1 seconds... 00:29:55.462 Running I/O for 1 seconds... 00:29:55.462 Running I/O for 1 seconds... 00:29:55.720 Running I/O for 1 seconds... 
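The four launches above run concurrently, one bdevperf per workload, each pinned to its own core (-m 0x10/0x20/0x40/0x80) with its own shm instance id (-i 1..4) and 256 MB of memory (-s 256); the PIDs are captured so the script can wait on each in turn, which is why the "wait 1717629" record appears before the first result table below. One launch, stripped down:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
    -q 128 -o 4096 -w write -t 1 -s 256 &
WRITE_PID=$!
# READ_PID (-m 0x20 -i 2 -w read), FLUSH_PID (-m 0x40 -i 3 -w flush) and
# UNMAP_PID (-m 0x80 -i 4 -w unmap) are started the same way
wait "$WRITE_PID"    # each job prints its latency table as it completes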
00:29:56.545 8252.00 IOPS, 32.23 MiB/s 00:29:56.545 Latency(us) 00:29:56.545 [2024-12-12T09:43:30.568Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:56.545 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:29:56.545 Nvme1n1 : 1.02 8262.47 32.28 0.00 0.00 15405.17 1513.57 22344.66 00:29:56.545 [2024-12-12T09:43:30.568Z] =================================================================================================================== 00:29:56.545 [2024-12-12T09:43:30.568Z] Total : 8262.47 32.28 0.00 0.00 15405.17 1513.57 22344.66 00:29:56.545 240488.00 IOPS, 939.41 MiB/s [2024-12-12T09:43:30.568Z] 7685.00 IOPS, 30.02 MiB/s 00:29:56.545 Latency(us) 00:29:56.545 [2024-12-12T09:43:30.568Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:56.546 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:29:56.546 Nvme1n1 : 1.00 240125.49 937.99 0.00 0.00 530.06 222.35 1490.16 00:29:56.546 [2024-12-12T09:43:30.569Z] =================================================================================================================== 00:29:56.546 [2024-12-12T09:43:30.569Z] Total : 240125.49 937.99 0.00 0.00 530.06 222.35 1490.16 00:29:56.546 00:29:56.546 Latency(us) 00:29:56.546 [2024-12-12T09:43:30.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:56.546 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:29:56.546 Nvme1n1 : 1.01 7771.54 30.36 0.00 0.00 16418.58 4805.97 23842.62 00:29:56.546 [2024-12-12T09:43:30.569Z] =================================================================================================================== 00:29:56.546 [2024-12-12T09:43:30.569Z] Total : 7771.54 30.36 0.00 0.00 16418.58 4805.97 23842.62 00:29:56.546 10:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1717631 00:29:56.546 10:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1717633 00:29:56.546 10:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1717636 00:29:56.546 13778.00 IOPS, 53.82 MiB/s 00:29:56.546 Latency(us) 00:29:56.546 [2024-12-12T09:43:30.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:56.546 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:29:56.546 Nvme1n1 : 1.00 13864.32 54.16 0.00 0.00 9213.26 1825.65 14542.75 00:29:56.546 [2024-12-12T09:43:30.569Z] =================================================================================================================== 00:29:56.546 [2024-12-12T09:43:30.569Z] Total : 13864.32 54.16 0.00 0.00 9213.26 1825.65 14542.75 00:29:56.804 10:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:56.804 10:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.804 10:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:56.804 10:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.804 10:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:29:56.804 10:43:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:29:56.804 10:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:56.804 10:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:29:56.804 10:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:56.804 10:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:29:56.804 10:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:56.804 10:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:56.804 rmmod nvme_tcp 00:29:56.804 rmmod nvme_fabrics 00:29:56.804 rmmod nvme_keyring 00:29:56.804 10:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:56.804 10:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:29:56.804 10:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:29:56.804 10:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1717387 ']' 00:29:56.804 10:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1717387 00:29:56.804 10:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1717387 ']' 00:29:56.804 10:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1717387 00:29:56.804 10:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:29:56.804 10:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:56.804 10:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1717387 00:29:56.804 10:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:56.804 10:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:56.804 10:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1717387' 00:29:56.804 killing process with pid 1717387 00:29:56.804 10:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1717387 00:29:56.804 10:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1717387 00:29:57.063 10:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:57.063 10:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:57.063 10:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:57.063 10:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:29:57.063 10:43:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:57.063 10:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:29:57.063 10:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:29:57.063 10:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:57.063 10:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:57.063 10:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:57.063 10:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:57.063 10:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:59.598 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:59.598 00:29:59.598 real 0m11.394s 00:29:59.598 user 0m15.122s 00:29:59.598 sys 0m6.388s 00:29:59.598 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:59.598 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:59.598 ************************************ 00:29:59.598 END TEST nvmf_bdev_io_wait 00:29:59.598 ************************************ 00:29:59.598 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:29:59.598 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:59.598 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:59.598 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:59.598 ************************************ 00:29:59.598 START TEST nvmf_queue_depth 00:29:59.598 ************************************ 00:29:59.598 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:29:59.598 * Looking for test storage... 
00:29:59.598 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:59.598 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:59.598 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:29:59.598 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:59.598 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:59.598 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:59.598 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:59.598 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:59.598 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:29:59.598 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:29:59.598 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:29:59.598 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:29:59.598 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:29:59.598 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:29:59.598 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:29:59.598 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:59.598 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:29:59.598 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:29:59.598 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:59.598 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:59.598 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:29:59.598 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:29:59.598 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:59.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.599 --rc genhtml_branch_coverage=1 00:29:59.599 --rc genhtml_function_coverage=1 00:29:59.599 --rc genhtml_legend=1 00:29:59.599 --rc geninfo_all_blocks=1 00:29:59.599 --rc geninfo_unexecuted_blocks=1 00:29:59.599 00:29:59.599 ' 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:59.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.599 --rc genhtml_branch_coverage=1 00:29:59.599 --rc genhtml_function_coverage=1 00:29:59.599 --rc genhtml_legend=1 00:29:59.599 --rc geninfo_all_blocks=1 00:29:59.599 --rc geninfo_unexecuted_blocks=1 00:29:59.599 00:29:59.599 ' 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:59.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.599 --rc genhtml_branch_coverage=1 00:29:59.599 --rc genhtml_function_coverage=1 00:29:59.599 --rc genhtml_legend=1 00:29:59.599 --rc geninfo_all_blocks=1 00:29:59.599 --rc geninfo_unexecuted_blocks=1 00:29:59.599 00:29:59.599 ' 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:59.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.599 --rc genhtml_branch_coverage=1 00:29:59.599 --rc genhtml_function_coverage=1 00:29:59.599 --rc genhtml_legend=1 00:29:59.599 --rc geninfo_all_blocks=1 00:29:59.599 --rc 
geninfo_unexecuted_blocks=1 00:29:59.599 00:29:59.599 ' 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:29:59.599 10:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:04.869 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:04.869 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:30:04.869 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:04.870 10:43:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:04.870 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:04.870 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
00:30:04.870 Found net devices under 0000:af:00.0: cvl_0_0 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:04.870 Found net devices under 0000:af:00.1: cvl_0_1 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:04.870 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:05.129 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:05.129 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:05.129 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:05.129 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:05.129 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:05.129 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:05.129 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:05.129 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:05.129 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:05.129 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:30:05.129 00:30:05.129 --- 10.0.0.2 ping statistics --- 00:30:05.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.130 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:30:05.130 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:05.130 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:05.130 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:30:05.130 00:30:05.130 --- 10.0.0.1 ping statistics --- 00:30:05.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.130 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:30:05.130 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:05.130 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:30:05.130 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:05.130 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:05.130 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:05.130 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:05.130 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:05.130 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:05.130 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:05.130 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:30:05.130 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:05.130 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:05.130 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:05.130 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1721346 00:30:05.130 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1721346 00:30:05.130 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:05.130 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1721346 ']' 00:30:05.130 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:05.130 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:05.130 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:05.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
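With connectivity verified in both directions, nvmfappstart launches the target inside the namespace (NVMF_APP is prefixed with the netns exec command at nvmf/common.sh@293) and waitforlisten blocks until the RPC socket answers. A hedged sketch of that launch-and-wait shape; the retry bound and polling method are assumptions, only the command line itself is from this log:

    # Sketch of nvmfappstart + waitforlisten; not the exact implementation.
    "${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}" -m 0x2 &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do
        kill -0 "$nvmfpid" || exit 1      # target died before it could listen
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done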
00:30:05.130 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:05.130 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:05.388 [2024-12-12 10:43:39.169979] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:05.388 [2024-12-12 10:43:39.170992] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:30:05.388 [2024-12-12 10:43:39.171031] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:05.388 [2024-12-12 10:43:39.252582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:05.388 [2024-12-12 10:43:39.293498] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:05.388 [2024-12-12 10:43:39.293535] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:05.388 [2024-12-12 10:43:39.293542] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:05.388 [2024-12-12 10:43:39.293548] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:05.388 [2024-12-12 10:43:39.293557] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:05.388 [2024-12-12 10:43:39.294030] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:05.388 [2024-12-12 10:43:39.361628] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:05.388 [2024-12-12 10:43:39.361835] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:05.388 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:05.388 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:05.388 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:05.388 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:05.388 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:05.647 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:05.647 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:05.647 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.647 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:05.647 [2024-12-12 10:43:39.434670] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:05.647 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.647 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:05.647 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.647 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:05.647 Malloc0 00:30:05.647 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.647 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:05.647 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.647 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:05.647 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.647 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:05.647 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.647 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:05.647 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.647 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:05.647 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:30:05.647 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:05.647 [2024-12-12 10:43:39.506834] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:05.647 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.647 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1721543 00:30:05.647 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:05.647 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:30:05.647 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1721543 /var/tmp/bdevperf.sock 00:30:05.647 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1721543 ']' 00:30:05.647 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:05.647 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:05.647 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:05.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:05.647 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:05.647 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:05.647 [2024-12-12 10:43:39.558355] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:30:05.647 [2024-12-12 10:43:39.558399] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1721543 ] 00:30:05.647 [2024-12-12 10:43:39.616132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:05.647 [2024-12-12 10:43:39.661845] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:05.906 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:05.906 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:05.906 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:05.906 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.906 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:06.164 NVMe0n1 00:30:06.164 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.164 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:06.164 Running I/O for 10 seconds... 00:30:08.473 11757.00 IOPS, 45.93 MiB/s [2024-12-12T09:43:43.431Z] 12265.00 IOPS, 47.91 MiB/s [2024-12-12T09:43:44.366Z] 12294.67 IOPS, 48.03 MiB/s [2024-12-12T09:43:45.300Z] 12370.00 IOPS, 48.32 MiB/s [2024-12-12T09:43:46.235Z] 12493.60 IOPS, 48.80 MiB/s [2024-12-12T09:43:47.217Z] 12491.83 IOPS, 48.80 MiB/s [2024-12-12T09:43:48.221Z] 12524.29 IOPS, 48.92 MiB/s [2024-12-12T09:43:49.155Z] 12514.75 IOPS, 48.89 MiB/s [2024-12-12T09:43:50.532Z] 12517.22 IOPS, 48.90 MiB/s [2024-12-12T09:43:50.532Z] 12535.80 IOPS, 48.97 MiB/s 00:30:16.509 Latency(us) 00:30:16.509 [2024-12-12T09:43:50.532Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:16.509 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:30:16.509 Verification LBA range: start 0x0 length 0x4000 00:30:16.509 NVMe0n1 : 10.05 12560.19 49.06 0.00 0.00 81223.79 12607.88 54925.41 00:30:16.509 [2024-12-12T09:43:50.532Z] =================================================================================================================== 00:30:16.509 [2024-12-12T09:43:50.532Z] Total : 12560.19 49.06 0.00 0.00 81223.79 12607.88 54925.41 00:30:16.509 { 00:30:16.509 "results": [ 00:30:16.509 { 00:30:16.509 "job": "NVMe0n1", 00:30:16.509 "core_mask": "0x1", 00:30:16.509 "workload": "verify", 00:30:16.509 "status": "finished", 00:30:16.509 "verify_range": { 00:30:16.509 "start": 0, 00:30:16.509 "length": 16384 00:30:16.509 }, 00:30:16.509 "queue_depth": 1024, 00:30:16.509 "io_size": 4096, 00:30:16.509 "runtime": 10.053829, 00:30:16.509 "iops": 12560.189754570125, 00:30:16.509 "mibps": 49.06324122878955, 00:30:16.509 "io_failed": 0, 00:30:16.509 "io_timeout": 0, 00:30:16.509 "avg_latency_us": 81223.7943922668, 00:30:16.509 "min_latency_us": 12607.878095238095, 00:30:16.509 "max_latency_us": 54925.409523809525 00:30:16.509 } 
00:30:16.509 ], 00:30:16.509 "core_count": 1 00:30:16.509 } 00:30:16.509 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1721543 00:30:16.509 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1721543 ']' 00:30:16.509 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1721543 00:30:16.509 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:16.509 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:16.509 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1721543 00:30:16.509 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:16.509 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:16.509 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1721543' 00:30:16.509 killing process with pid 1721543 00:30:16.509 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1721543 00:30:16.509 Received shutdown signal, test time was about 10.000000 seconds 00:30:16.509 00:30:16.509 Latency(us) 00:30:16.509 [2024-12-12T09:43:50.533Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:16.510 [2024-12-12T09:43:50.533Z] =================================================================================================================== 00:30:16.510 [2024-12-12T09:43:50.533Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:16.510 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1721543 00:30:16.510 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:30:16.510 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:30:16.510 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:16.510 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:30:16.510 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:16.510 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:30:16.510 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:16.510 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:16.510 rmmod nvme_tcp 00:30:16.510 rmmod nvme_fabrics 00:30:16.510 rmmod nvme_keyring 00:30:16.510 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:16.510 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:30:16.510 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 
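Teardown unwinds in reverse: bdevperf is killed once the 10-second run and its latency summary are in, then nvmfcleanup retries unloading the kernel modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above) under set +e so a transient "module in use" does not abort the run. A sketch of that retry shape; only the {1..20} bound and module names come from this log, the back-off is assumed:

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1    # assumed back-off between attempts
    done
    set -e

The nvmf_tgt process itself (pid 1721346) is killed next, below, before the iptables rule and the namespace are torn down.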
00:30:16.510 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1721346 ']' 00:30:16.510 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1721346 00:30:16.510 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1721346 ']' 00:30:16.510 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1721346 00:30:16.510 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:16.510 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:16.510 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1721346 00:30:16.510 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:16.510 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:16.510 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1721346' 00:30:16.510 killing process with pid 1721346 00:30:16.510 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1721346 00:30:16.510 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1721346 00:30:16.769 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:16.769 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:16.769 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:16.769 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:30:16.769 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:30:16.769 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:16.769 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:30:16.769 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:16.769 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:16.769 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:16.769 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:16.769 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:19.303 10:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:19.303 00:30:19.303 real 0m19.701s 00:30:19.303 user 0m22.865s 00:30:19.303 sys 0m6.241s 00:30:19.304 10:43:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:19.304 10:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:19.304 ************************************ 00:30:19.304 END TEST nvmf_queue_depth 00:30:19.304 ************************************ 00:30:19.304 10:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:19.304 10:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:19.304 10:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:19.304 10:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:19.304 ************************************ 00:30:19.304 START TEST nvmf_target_multipath 00:30:19.304 ************************************ 00:30:19.304 10:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:19.304 * Looking for test storage... 00:30:19.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:19.304 10:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:19.304 10:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:30:19.304 10:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@344 -- # case "$op" in 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:19.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.304 --rc genhtml_branch_coverage=1 00:30:19.304 --rc genhtml_function_coverage=1 00:30:19.304 --rc genhtml_legend=1 00:30:19.304 --rc geninfo_all_blocks=1 00:30:19.304 --rc geninfo_unexecuted_blocks=1 00:30:19.304 00:30:19.304 ' 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:19.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.304 --rc genhtml_branch_coverage=1 00:30:19.304 --rc genhtml_function_coverage=1 00:30:19.304 --rc genhtml_legend=1 00:30:19.304 --rc geninfo_all_blocks=1 00:30:19.304 --rc geninfo_unexecuted_blocks=1 00:30:19.304 00:30:19.304 ' 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:19.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.304 --rc genhtml_branch_coverage=1 00:30:19.304 --rc genhtml_function_coverage=1 00:30:19.304 --rc genhtml_legend=1 
00:30:19.304 --rc geninfo_all_blocks=1 00:30:19.304 --rc geninfo_unexecuted_blocks=1 00:30:19.304 00:30:19.304 ' 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:19.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.304 --rc genhtml_branch_coverage=1 00:30:19.304 --rc genhtml_function_coverage=1 00:30:19.304 --rc genhtml_legend=1 00:30:19.304 --rc geninfo_all_blocks=1 00:30:19.304 --rc geninfo_unexecuted_blocks=1 00:30:19.304 00:30:19.304 ' 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.304 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.305 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.305 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:30:19.305 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.305 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:30:19.305 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:30:19.305 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:19.305 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:19.305 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:19.305 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:19.305 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:19.305 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:19.305 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:19.305 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:19.305 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:19.305 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:19.305 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:19.305 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:19.305 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:19.305 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:30:19.305 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:19.305 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:19.305 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:19.305 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:19.305 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:19.305 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:19.305 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:19.305 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:19.305 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:19.305 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:19.305 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:30:19.305 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:25.876 10:43:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:25.876 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:25.876 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:25.876 10:43:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:25.876 Found net devices under 0000:af:00.0: cvl_0_0 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:25.876 Found net devices under 0000:af:00.1: cvl_0_1 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:25.876 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:25.877 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:25.877 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:25.877 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:25.877 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:25.877 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:25.877 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:25.877 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
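What nvmf_tcp_init has just done is turn the one dual-port NIC into a self-contained target/initiator pair: port cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and becomes the target at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, so NVMe/TCP traffic crosses the real link between the two ports. Stripped of the script plumbing, the topology setup is just (interface names and addresses as seen in this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port

The two pings that follow (root namespace to 10.0.0.2, then from inside the namespace back to 10.0.0.1) verify the path in both directions before any NVMe traffic is attempted.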
00:30:25.877 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:30:25.877 00:30:25.877 --- 10.0.0.2 ping statistics --- 00:30:25.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:25.877 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:30:25.877 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:25.877 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:25.877 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:30:25.877 00:30:25.877 --- 10.0.0.1 ping statistics --- 00:30:25.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:25.877 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:30:25.877 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:25.877 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:30:25.877 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:25.877 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:25.877 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:25.877 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:25.877 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:25.877 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:25.877 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:25.877 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:30:25.877 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:30:25.877 only one NIC for nvmf test 00:30:25.877 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:30:25.877 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:25.877 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:25.877 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:25.877 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:25.877 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:25.877 10:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:25.877 rmmod nvme_tcp 00:30:25.877 rmmod nvme_fabrics 00:30:25.877 rmmod nvme_keyring 00:30:25.877 10:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:25.877 10:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:25.877 10:43:59 
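The teardown that starts here (nvmftestfini, then nvmfcleanup) unloads the kernel initiator modules with a deliberately forgiving pattern: errexit is switched off and modprobe -r is retried up to 20 times, because module reference counts can lag briefly behind connection teardown. The rmmod lines interleaved above are modprobe's verbose output from the first, successful pass. The shape of the loop as visible in the trace (any sleep between attempts is an assumption here, not something the trace shows):

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp &&
            modprobe -v -r nvme-fabrics && break
        sleep 1   # assumed back-off; the real helper may retry immediately
    done
    set -e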
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:25.877 10:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:25.877 10:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:25.877 10:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:25.877 10:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:25.877 10:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:25.877 10:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:25.877 10:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:25.877 10:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:25.877 10:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:25.877 10:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:25.877 10:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:25.877 10:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:25.877 10:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:27.254 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:27.254 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:30:27.254 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:30:27.254 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:27.254 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:27.254 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:27.254 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:27.254 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:27.254 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:27.254 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:27.254 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:27.254 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:27.254 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:27.255 10:44:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:27.255 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:27.255 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:27.255 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:27.255 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:27.255 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:27.255 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:27.255 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:27.255 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:27.255 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:27.255 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:27.255 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:27.255 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:27.255 00:30:27.255 real 0m8.279s 00:30:27.255 user 0m1.784s 00:30:27.255 sys 0m4.452s 00:30:27.255 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:27.255 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:27.255 ************************************ 00:30:27.255 END TEST nvmf_target_multipath 00:30:27.255 ************************************ 00:30:27.255 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:27.255 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:27.255 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:27.255 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:27.255 ************************************ 00:30:27.255 START TEST nvmf_zcopy 00:30:27.255 ************************************ 00:30:27.255 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:27.255 * Looking for test storage... 
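Worth noting in the multipath teardown above is the firewall cleanup idiom: every rule the harness inserts goes in through the ipts wrapper, which appends an iptables comment starting with SPDK_NVMF: recording the original rule text. The iptr helper can then drop all harness rules at once, without tracking them individually, by round-tripping the ruleset:

    # Insert with a tag (what ipts does):
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # Remove every tagged rule in one shot (what iptr does):
    iptables-save | grep -v SPDK_NVMF | iptables-restore

run_test then closes the multipath test with its timing summary (real 0m8.279s) and immediately launches the next one, nvmf_zcopy, whose trace follows.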
00:30:27.515 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:27.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.515 --rc genhtml_branch_coverage=1 00:30:27.515 --rc genhtml_function_coverage=1 00:30:27.515 --rc genhtml_legend=1 00:30:27.515 --rc geninfo_all_blocks=1 00:30:27.515 --rc geninfo_unexecuted_blocks=1 00:30:27.515 00:30:27.515 ' 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:27.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.515 --rc genhtml_branch_coverage=1 00:30:27.515 --rc genhtml_function_coverage=1 00:30:27.515 --rc genhtml_legend=1 00:30:27.515 --rc geninfo_all_blocks=1 00:30:27.515 --rc geninfo_unexecuted_blocks=1 00:30:27.515 00:30:27.515 ' 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:27.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.515 --rc genhtml_branch_coverage=1 00:30:27.515 --rc genhtml_function_coverage=1 00:30:27.515 --rc genhtml_legend=1 00:30:27.515 --rc geninfo_all_blocks=1 00:30:27.515 --rc geninfo_unexecuted_blocks=1 00:30:27.515 00:30:27.515 ' 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:27.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.515 --rc genhtml_branch_coverage=1 00:30:27.515 --rc genhtml_function_coverage=1 00:30:27.515 --rc genhtml_legend=1 00:30:27.515 --rc geninfo_all_blocks=1 00:30:27.515 --rc geninfo_unexecuted_blocks=1 00:30:27.515 00:30:27.515 ' 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
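The block above is scripts/common.sh deciding whether the installed lcov (1.15) is older than 2, which selects the pre-2.0 option spelling for coverage runs. cmp_versions splits each version string on '.', '-' and ':' into arrays and compares field by field, padding the shorter version with zeros. A condensed sketch of the same comparison (numeric fields only; the real helper also validates each field against a regex):

    lt() {   # lt A B: succeed if version A < version B
        local -a a b; local i
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        done
        return 1   # versions are equal
    }
    lt 1.15 2 && echo "old lcov: use the --rc lcov_* option names"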
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.515 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:30:27.516 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.516 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:30:27.516 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:27.516 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:27.516 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:27.516 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:27.516 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:27.516 10:44:01 
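Two setup details pass by in this zcopy prologue. First, the host identity comes from nvme gen-hostnqn, which emits the NVMe spec's UUID-based NQN form, nqn.2014-08.org.nvmexpress:uuid:<uuid>, with NVME_HOSTID holding the same UUID. Second, build_nvmf_app_args assembles the target's command line incrementally, which is how --interrupt-mode ends up on nvmf_tgt in this job, as the '[' 1 -eq 1 ']' check immediately below shows. Roughly (the interrupt_mode variable is illustrative; gen-hostnqn may derive the UUID from DMI rather than generating a random one):

    # UUID-based host NQN, same shape as `nvme gen-hostnqn` output:
    hostid=$(uuidgen)
    hostnqn="nqn.2014-08.org.nvmexpress:uuid:${hostid}"

    # Incremental app argument assembly:
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
    (( interrupt_mode )) && NVMF_APP+=(--interrupt-mode)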
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:27.516 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:27.516 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:27.516 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:27.516 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:27.516 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:30:27.516 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:27.516 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:27.516 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:27.516 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:27.516 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:27.516 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:27.516 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:27.516 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:27.516 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:27.516 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:27.516 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:30:27.516 10:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:30:34.086 10:44:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:34.086 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:34.086 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:34.086 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:34.087 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:34.087 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:34.087 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:34.087 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:34.087 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:34.087 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:34.087 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:34.087 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:34.087 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:34.087 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:34.087 Found net devices under 0000:af:00.0: cvl_0_0 00:30:34.087 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:34.087 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:34.087 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:34.087 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:34.087 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:34.087 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:34.087 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:34.087 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:34.087 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:34.087 Found net devices under 0000:af:00.1: cvl_0_1 00:30:34.087 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:34.087 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:34.087 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:30:34.087 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:34.087 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:34.087 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:34.087 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:34.087 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:34.087 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:34.087 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:34.087 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:34.087 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:34.087 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:34.087 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:34.087 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:34.087 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:34.087 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:34.087 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:34.087 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:34.087 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:34.087 10:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:34.087 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:34.087 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:34.087 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:34.087 10:44:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:34.087 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:34.087 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:34.087 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:34.087 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:34.087 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:34.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:30:34.087 00:30:34.087 --- 10.0.0.2 ping statistics --- 00:30:34.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:34.087 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:30:34.087 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:34.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:34.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:30:34.087 00:30:34.087 --- 10.0.0.1 ping statistics --- 00:30:34.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:34.087 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:30:34.087 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:34.087 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:30:34.087 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:34.087 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:34.087 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:34.087 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:34.087 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:34.087 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:34.087 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:34.087 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:30:34.087 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:34.087 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:34.087 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:34.087 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1730064 00:30:34.087 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF --interrupt-mode -m 0x2 00:30:34.087 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1730064 00:30:34.087 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1730064 ']' 00:30:34.087 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:34.087 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:34.087 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:34.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:34.087 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:34.087 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:34.087 [2024-12-12 10:44:07.304010] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:34.087 [2024-12-12 10:44:07.304880] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:30:34.087 [2024-12-12 10:44:07.304909] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:34.087 [2024-12-12 10:44:07.381038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:34.087 [2024-12-12 10:44:07.421049] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:34.087 [2024-12-12 10:44:07.421082] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:34.087 [2024-12-12 10:44:07.421089] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:34.087 [2024-12-12 10:44:07.421095] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:34.087 [2024-12-12 10:44:07.421100] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:34.087 [2024-12-12 10:44:07.421577] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:34.087 [2024-12-12 10:44:07.488989] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:34.087 [2024-12-12 10:44:07.489193] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
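nvmfappstart launches the target inside the namespace (note the ip netns exec cvl_0_0_ns_spdk prefix on the nvmf_tgt command line above, contributed by NVMF_TARGET_NS_CMD), and waitforlisten then blocks until RPC responds on /var/tmp/spdk.sock. The core mask -m 0x2 pins the app to core 1, matching the "Reactor started on core 1" notice. The start-and-poll pattern reduces to something like this (the real waitforlisten differs in detail; rpc_get_methods is just a cheap, always-available RPC to probe with):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do
        scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods \
            &> /dev/null && break
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done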
00:30:34.087 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:34.087 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:30:34.087 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:34.087 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:34.087 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:34.087 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:34.087 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:30:34.087 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:30:34.088 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.088 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:34.088 [2024-12-12 10:44:07.562186] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:34.088 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.088 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:34.088 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.088 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:34.088 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.088 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:34.088 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.088 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:34.088 [2024-12-12 10:44:07.586385] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:34.088 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.088 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:34.088 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.088 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:34.088 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.088 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:30:34.088 10:44:07 
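With the app up, the target is provisioned entirely over RPC: a TCP transport created with zero-copy enabled (--zcopy, with -c 0 setting the in-capsule data size to zero; the -o flag is carried in from the NVMF_TRANSPORT_OPTS='-t tcp -o' seen earlier), a subsystem capped at 10 namespaces (-m 10) that accepts any host (-a), a listener on 10.0.0.2:4420, and a 32 MiB malloc bdev with 4096-byte blocks attached as NSID 1. rpc_cmd is a thin wrapper over scripts/rpc.py, so the same sequence by hand would be:

    rpc="scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a \
        -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0   # 32 MiB bdev, 4 KiB blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1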
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.088 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:34.088 malloc0 00:30:34.088 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.088 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:30:34.088 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.088 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:34.088 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.088 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:30:34.088 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:30:34.088 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:30:34.088 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:30:34.088 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:34.088 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:34.088 { 00:30:34.088 "params": { 00:30:34.088 "name": "Nvme$subsystem", 00:30:34.088 "trtype": "$TEST_TRANSPORT", 00:30:34.088 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:34.088 "adrfam": "ipv4", 00:30:34.088 "trsvcid": "$NVMF_PORT", 00:30:34.088 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:34.088 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:34.088 "hdgst": ${hdgst:-false}, 00:30:34.088 "ddgst": ${ddgst:-false} 00:30:34.088 }, 00:30:34.088 "method": "bdev_nvme_attach_controller" 00:30:34.088 } 00:30:34.088 EOF 00:30:34.088 )") 00:30:34.088 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:30:34.088 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:30:34.088 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:30:34.088 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:34.088 "params": { 00:30:34.088 "name": "Nvme1", 00:30:34.088 "trtype": "tcp", 00:30:34.088 "traddr": "10.0.0.2", 00:30:34.088 "adrfam": "ipv4", 00:30:34.088 "trsvcid": "4420", 00:30:34.088 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:34.088 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:34.088 "hdgst": false, 00:30:34.088 "ddgst": false 00:30:34.088 }, 00:30:34.088 "method": "bdev_nvme_attach_controller" 00:30:34.088 }' 00:30:34.088 [2024-12-12 10:44:07.671767] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
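bdevperf receives its target description as JSON over an inherited file descriptor (--json /dev/fd/62): gen_nvmf_target_json expands one bdev_nvme_attach_controller entry per subsystem with this run's address, NQNs, and digest settings, which is exactly the object printed above. Written to a file instead of a pipe, and assuming the standard bdev-subsystem envelope that SPDK's JSON config loader expects, the equivalent would be:

    cat > /tmp/bdevperf.json <<'JSON'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    JSON
    ./build/examples/bdevperf --json /tmp/bdevperf.json \
        -t 10 -q 128 -w verify -o 8192   # 10 s verify run, QD 128, 8 KiB I/O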
00:30:34.088 [2024-12-12 10:44:07.671810] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1730086 ] 00:30:34.088 [2024-12-12 10:44:07.731663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:34.088 [2024-12-12 10:44:07.774959] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:34.088 Running I/O for 10 seconds... 00:30:36.399 8624.00 IOPS, 67.38 MiB/s [2024-12-12T09:44:11.356Z] 8647.00 IOPS, 67.55 MiB/s [2024-12-12T09:44:12.290Z] 8656.67 IOPS, 67.63 MiB/s [2024-12-12T09:44:13.225Z] 8676.75 IOPS, 67.79 MiB/s [2024-12-12T09:44:14.160Z] 8688.40 IOPS, 67.88 MiB/s [2024-12-12T09:44:15.095Z] 8696.50 IOPS, 67.94 MiB/s [2024-12-12T09:44:16.469Z] 8702.71 IOPS, 67.99 MiB/s [2024-12-12T09:44:17.404Z] 8706.50 IOPS, 68.02 MiB/s [2024-12-12T09:44:18.339Z] 8703.11 IOPS, 67.99 MiB/s [2024-12-12T09:44:18.339Z] 8700.80 IOPS, 67.97 MiB/s 00:30:44.316 Latency(us) 00:30:44.316 [2024-12-12T09:44:18.339Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:44.316 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:30:44.316 Verification LBA range: start 0x0 length 0x1000 00:30:44.316 Nvme1n1 : 10.01 8703.15 67.99 0.00 0.00 14665.03 2715.06 20846.69 00:30:44.316 [2024-12-12T09:44:18.339Z] =================================================================================================================== 00:30:44.316 [2024-12-12T09:44:18.339Z] Total : 8703.15 67.99 0.00 0.00 14665.03 2715.06 20846.69 00:30:44.316 10:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1731856 00:30:44.316 10:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:30:44.316 10:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:44.316 10:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:30:44.316 10:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:30:44.316 10:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:30:44.316 10:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:30:44.316 10:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:44.316 10:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:44.316 { 00:30:44.316 "params": { 00:30:44.316 "name": "Nvme$subsystem", 00:30:44.316 "trtype": "$TEST_TRANSPORT", 00:30:44.316 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:44.316 "adrfam": "ipv4", 00:30:44.316 "trsvcid": "$NVMF_PORT", 00:30:44.316 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:44.316 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:44.316 "hdgst": ${hdgst:-false}, 00:30:44.316 "ddgst": ${ddgst:-false} 00:30:44.316 }, 00:30:44.316 "method": "bdev_nvme_attach_controller" 00:30:44.316 } 00:30:44.316 EOF 00:30:44.316 )") 00:30:44.316 [2024-12-12 10:44:18.249924] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
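Two quick consistency checks on the verify-run summary above. With -o 8192 each I/O moves 8 KiB, so throughput in MiB/s is IOPS x 8192 / 2^20 = IOPS / 128: 8703.15 / 128 = 67.99 MiB/s, exactly the MiB/s column, and the same relation holds for the per-second ticks (8624.00 / 128 = 67.38). By Little's law the average number of outstanding I/Os is IOPS x mean latency: 8703.15 x 14665.03 us = 127.6, which matches the configured queue depth of 128 to within rounding, i.e. the queue stayed full for the whole 10 s run.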
already in use 00:30:44.316 [2024-12-12 10:44:18.249959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.316 10:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:30:44.316 10:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:30:44.316 10:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:30:44.316 10:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:44.316 "params": { 00:30:44.316 "name": "Nvme1", 00:30:44.316 "trtype": "tcp", 00:30:44.316 "traddr": "10.0.0.2", 00:30:44.316 "adrfam": "ipv4", 00:30:44.316 "trsvcid": "4420", 00:30:44.316 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:44.316 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:44.316 "hdgst": false, 00:30:44.316 "ddgst": false 00:30:44.316 }, 00:30:44.316 "method": "bdev_nvme_attach_controller" 00:30:44.316 }' 00:30:44.316 [2024-12-12 10:44:18.261897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.316 [2024-12-12 10:44:18.261913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.316 [2024-12-12 10:44:18.273882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.316 [2024-12-12 10:44:18.273893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.316 [2024-12-12 10:44:18.285877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.316 [2024-12-12 10:44:18.285887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.316 [2024-12-12 10:44:18.288592] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
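This second bdevperf instance (perfpid=1731856) reuses the same generated JSON config but runs a different workload: 5 seconds of 50/50 random read/write at queue depth 128 and 8 KiB I/O. Note that the EAL parameters give it its own --file-prefix=spdk_pid1731856, keeping its hugepage and shared-memory files apart from the target's and from the earlier verify run's. A sketch of the launch, with the flags verbatim from the trace and gen_nvmf_target_json standing for the helper traced above:

    build/examples/bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192 &
    perfpid=$!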
00:30:44.316 [2024-12-12 10:44:18.288638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1731856 ] 00:30:44.316 [2024-12-12 10:44:18.297878] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.316 [2024-12-12 10:44:18.297891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.316 [2024-12-12 10:44:18.309877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.316 [2024-12-12 10:44:18.309888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.316 [2024-12-12 10:44:18.321894] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.316 [2024-12-12 10:44:18.321904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.316 [2024-12-12 10:44:18.333880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.316 [2024-12-12 10:44:18.333891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.575 [2024-12-12 10:44:18.345883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.575 [2024-12-12 10:44:18.345893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.575 [2024-12-12 10:44:18.357879] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.575 [2024-12-12 10:44:18.357888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.575 [2024-12-12 10:44:18.363462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.575 [2024-12-12 10:44:18.369879] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.575 [2024-12-12 10:44:18.369891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.575 [2024-12-12 10:44:18.381881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.575 [2024-12-12 10:44:18.381893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.575 [2024-12-12 10:44:18.393880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.575 [2024-12-12 10:44:18.393891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.576 [2024-12-12 10:44:18.402260] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:44.576 [2024-12-12 10:44:18.405879] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.576 [2024-12-12 10:44:18.405891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.576 [2024-12-12 10:44:18.417892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.576 [2024-12-12 10:44:18.417910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.576 [2024-12-12 10:44:18.429886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.576 [2024-12-12 10:44:18.429904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.576 [2024-12-12 10:44:18.441901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
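From here to the end of the run the log is dominated by repeating pairs of subsystem.c:2130 / nvmf_rpc.c:1520 errors: while the randrw job is in flight, the test keeps calling nvmf_subsystem_add_ns for NSID 1, and the target rejects every attempt because malloc0 already occupies namespace 1 (the RPC pauses the subsystem, fails to add the namespace, and resumes, hence nvmf_rpc_ns_paused). The script side of that loop is not visible in this excerpt; a plausible reconstruction, with the RPC name, NQN and bdev name taken from the trace and the loop shape assumed:

    # Hammer the add-namespace path while bdevperf I/O is outstanding;
    # every call is expected to fail with "Requested NSID 1 already in use".
    while kill -0 "$perfpid" 2>/dev/null; do
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done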
use 00:30:44.576 [2024-12-12 10:44:18.441922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.576 [2024-12-12 10:44:18.453882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.576 [2024-12-12 10:44:18.453894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.576 [2024-12-12 10:44:18.465884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.576 [2024-12-12 10:44:18.465898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.576 [2024-12-12 10:44:18.477881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.576 [2024-12-12 10:44:18.477894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.576 [2024-12-12 10:44:18.489895] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.576 [2024-12-12 10:44:18.489921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.576 [2024-12-12 10:44:18.501888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.576 [2024-12-12 10:44:18.501905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.576 [2024-12-12 10:44:18.513886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.576 [2024-12-12 10:44:18.513902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.576 [2024-12-12 10:44:18.525886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.576 [2024-12-12 10:44:18.525901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.576 [2024-12-12 10:44:18.537881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.576 [2024-12-12 10:44:18.537893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.576 [2024-12-12 10:44:18.549878] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.576 [2024-12-12 10:44:18.549888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.576 [2024-12-12 10:44:18.561879] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.576 [2024-12-12 10:44:18.561889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.576 [2024-12-12 10:44:18.573900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.576 [2024-12-12 10:44:18.573915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.576 [2024-12-12 10:44:18.585879] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.576 [2024-12-12 10:44:18.585889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.576 [2024-12-12 10:44:18.597879] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.576 [2024-12-12 10:44:18.597890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.835 [2024-12-12 10:44:18.609877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.835 [2024-12-12 10:44:18.609887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.835 [2024-12-12 
10:44:18.621884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.835 [2024-12-12 10:44:18.621900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.835 [2024-12-12 10:44:18.633886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.835 [2024-12-12 10:44:18.633903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.835 Running I/O for 5 seconds... 00:30:44.835 [2024-12-12 10:44:18.649560] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.835 [2024-12-12 10:44:18.649592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.835 [2024-12-12 10:44:18.663750] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.835 [2024-12-12 10:44:18.663769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.835 [2024-12-12 10:44:18.678313] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.835 [2024-12-12 10:44:18.678333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.835 [2024-12-12 10:44:18.689798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.835 [2024-12-12 10:44:18.689818] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.835 [2024-12-12 10:44:18.703844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.835 [2024-12-12 10:44:18.703869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.835 [2024-12-12 10:44:18.718474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.835 [2024-12-12 10:44:18.718493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.835 [2024-12-12 10:44:18.732886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.835 [2024-12-12 10:44:18.732909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.835 [2024-12-12 10:44:18.747677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.835 [2024-12-12 10:44:18.747699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.835 [2024-12-12 10:44:18.762212] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.835 [2024-12-12 10:44:18.762230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.835 [2024-12-12 10:44:18.777849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.835 [2024-12-12 10:44:18.777868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.835 [2024-12-12 10:44:18.791354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.835 [2024-12-12 10:44:18.791372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.835 [2024-12-12 10:44:18.801616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.835 [2024-12-12 10:44:18.801635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.835 [2024-12-12 10:44:18.815306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:30:44.835 [2024-12-12 10:44:18.815324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.835 [2024-12-12 10:44:18.829583] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.835 [2024-12-12 10:44:18.829600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.835 [2024-12-12 10:44:18.841984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.835 [2024-12-12 10:44:18.842002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.835 [2024-12-12 10:44:18.856020] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.835 [2024-12-12 10:44:18.856038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.094 [2024-12-12 10:44:18.870607] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.094 [2024-12-12 10:44:18.870625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.094 [2024-12-12 10:44:18.885541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.094 [2024-12-12 10:44:18.885559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.094 [2024-12-12 10:44:18.899023] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.094 [2024-12-12 10:44:18.899041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.094 [2024-12-12 10:44:18.913715] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.094 [2024-12-12 10:44:18.913733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.094 [2024-12-12 10:44:18.927926] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.094 [2024-12-12 10:44:18.927944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.094 [2024-12-12 10:44:18.942471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.094 [2024-12-12 10:44:18.942488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.094 [2024-12-12 10:44:18.957541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.094 [2024-12-12 10:44:18.957560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.094 [2024-12-12 10:44:18.970521] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.094 [2024-12-12 10:44:18.970539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.094 [2024-12-12 10:44:18.985742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.094 [2024-12-12 10:44:18.985763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.094 [2024-12-12 10:44:18.997412] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.094 [2024-12-12 10:44:18.997430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.094 [2024-12-12 10:44:19.011635] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.094 [2024-12-12 10:44:19.011654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.094 [2024-12-12 10:44:19.025999] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.094 [2024-12-12 10:44:19.026017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.094 [2024-12-12 10:44:19.036456] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.094 [2024-12-12 10:44:19.036474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.094 [2024-12-12 10:44:19.051196] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.094 [2024-12-12 10:44:19.051214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.094 [2024-12-12 10:44:19.065897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.094 [2024-12-12 10:44:19.065915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.094 [2024-12-12 10:44:19.077105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.094 [2024-12-12 10:44:19.077122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.094 [2024-12-12 10:44:19.091631] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.094 [2024-12-12 10:44:19.091649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.094 [2024-12-12 10:44:19.105923] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.094 [2024-12-12 10:44:19.105941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.353 [2024-12-12 10:44:19.118840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.353 [2024-12-12 10:44:19.118859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.353 [2024-12-12 10:44:19.133697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.353 [2024-12-12 10:44:19.133716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.353 [2024-12-12 10:44:19.144935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.353 [2024-12-12 10:44:19.144955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.353 [2024-12-12 10:44:19.159947] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.353 [2024-12-12 10:44:19.159966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.353 [2024-12-12 10:44:19.174918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.353 [2024-12-12 10:44:19.174937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.353 [2024-12-12 10:44:19.189726] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.353 [2024-12-12 10:44:19.189745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.353 [2024-12-12 10:44:19.202866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.353 [2024-12-12 10:44:19.202885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.353 [2024-12-12 10:44:19.217852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.353 [2024-12-12 10:44:19.217871] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.353 [2024-12-12 10:44:19.230344] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.353 [2024-12-12 10:44:19.230362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.353 [2024-12-12 10:44:19.243383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.353 [2024-12-12 10:44:19.243401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.353 [2024-12-12 10:44:19.258077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.353 [2024-12-12 10:44:19.258096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.353 [2024-12-12 10:44:19.270223] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.353 [2024-12-12 10:44:19.270241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.353 [2024-12-12 10:44:19.283719] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.353 [2024-12-12 10:44:19.283737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.353 [2024-12-12 10:44:19.299167] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.353 [2024-12-12 10:44:19.299185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.353 [2024-12-12 10:44:19.314272] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.353 [2024-12-12 10:44:19.314290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.353 [2024-12-12 10:44:19.329542] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.353 [2024-12-12 10:44:19.329560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.353 [2024-12-12 10:44:19.343924] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.353 [2024-12-12 10:44:19.343944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.353 [2024-12-12 10:44:19.358523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.353 [2024-12-12 10:44:19.358541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.354 [2024-12-12 10:44:19.373356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.354 [2024-12-12 10:44:19.373374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.612 [2024-12-12 10:44:19.387371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.612 [2024-12-12 10:44:19.387390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.612 [2024-12-12 10:44:19.402254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.612 [2024-12-12 10:44:19.402272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.612 [2024-12-12 10:44:19.417341] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.612 [2024-12-12 10:44:19.417360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.612 [2024-12-12 10:44:19.431601] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.612 [2024-12-12 10:44:19.431620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.612 [2024-12-12 10:44:19.446260] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.612 [2024-12-12 10:44:19.446278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.612 [2024-12-12 10:44:19.461677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.612 [2024-12-12 10:44:19.461696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.612 [2024-12-12 10:44:19.475409] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.612 [2024-12-12 10:44:19.475428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.612 [2024-12-12 10:44:19.490455] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.612 [2024-12-12 10:44:19.490474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.612 [2024-12-12 10:44:19.506243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.612 [2024-12-12 10:44:19.506261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.612 [2024-12-12 10:44:19.521687] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.612 [2024-12-12 10:44:19.521705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.612 [2024-12-12 10:44:19.536010] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.612 [2024-12-12 10:44:19.536028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.612 [2024-12-12 10:44:19.550830] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.612 [2024-12-12 10:44:19.550848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.612 [2024-12-12 10:44:19.565473] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.612 [2024-12-12 10:44:19.565492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.612 [2024-12-12 10:44:19.579951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.612 [2024-12-12 10:44:19.579968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.612 [2024-12-12 10:44:19.594863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.612 [2024-12-12 10:44:19.594881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.612 [2024-12-12 10:44:19.609155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.612 [2024-12-12 10:44:19.609173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.612 [2024-12-12 10:44:19.623613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.612 [2024-12-12 10:44:19.623632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.871 [2024-12-12 10:44:19.637929] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.871 [2024-12-12 10:44:19.637947] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.871 16841.00 IOPS, 131.57 MiB/s [2024-12-12T09:44:19.894Z] [2024-12-12 10:44:19.650401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.871 [2024-12-12 10:44:19.650419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.871 [2024-12-12 10:44:19.663928] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.871 [2024-12-12 10:44:19.663947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.871 [2024-12-12 10:44:19.678273] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.871 [2024-12-12 10:44:19.678290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.871 [2024-12-12 10:44:19.694288] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.871 [2024-12-12 10:44:19.694306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.871 [2024-12-12 10:44:19.709602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.871 [2024-12-12 10:44:19.709621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.871 [2024-12-12 10:44:19.722727] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.871 [2024-12-12 10:44:19.722744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.871 [2024-12-12 10:44:19.737617] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.871 [2024-12-12 10:44:19.737636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.871 [2024-12-12 10:44:19.751096] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.871 [2024-12-12 10:44:19.751114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.871 [2024-12-12 10:44:19.765685] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.871 [2024-12-12 10:44:19.765703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.871 [2024-12-12 10:44:19.778775] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.871 [2024-12-12 10:44:19.778792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.871 [2024-12-12 10:44:19.793438] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.871 [2024-12-12 10:44:19.793460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.871 [2024-12-12 10:44:19.806922] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.871 [2024-12-12 10:44:19.806940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.871 [2024-12-12 10:44:19.818147] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.871 [2024-12-12 10:44:19.818164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.871 [2024-12-12 10:44:19.831407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.871 [2024-12-12 10:44:19.831425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.871 [2024-12-12 
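The throughput ticks embedded in the error stream (16841.00 IOPS, 131.57 MiB/s here, with 16775.50 and 16795.67 further down) obey the same IOPS / 128 relation, confirming the 8 KiB I/O size for this run too, and sit at roughly twice the rate of the 10 s verify pass. When mining a captured log for these markers, a pattern match is enough; the log file name below is illustrative:

    grep -Eo '[0-9]+\.[0-9]{2} IOPS, [0-9]+\.[0-9]+ MiB/s' nvmf_zcopy.log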
10:44:19.845973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.871 [2024-12-12 10:44:19.845992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.872 [2024-12-12 10:44:19.856804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.872 [2024-12-12 10:44:19.856822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.872 [2024-12-12 10:44:19.871383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.872 [2024-12-12 10:44:19.871401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:45.872 [2024-12-12 10:44:19.886555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:45.872 [2024-12-12 10:44:19.886579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.130 [2024-12-12 10:44:19.898079] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.130 [2024-12-12 10:44:19.898096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.130 [2024-12-12 10:44:19.911390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.130 [2024-12-12 10:44:19.911407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.130 [2024-12-12 10:44:19.925648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.130 [2024-12-12 10:44:19.925675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.130 [2024-12-12 10:44:19.938391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.130 [2024-12-12 10:44:19.938409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.130 [2024-12-12 10:44:19.951590] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.130 [2024-12-12 10:44:19.951608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.130 [2024-12-12 10:44:19.966338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.130 [2024-12-12 10:44:19.966356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.130 [2024-12-12 10:44:19.981484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.130 [2024-12-12 10:44:19.981502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.130 [2024-12-12 10:44:19.994814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.130 [2024-12-12 10:44:19.994832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.130 [2024-12-12 10:44:20.009835] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.130 [2024-12-12 10:44:20.009856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.130 [2024-12-12 10:44:20.022831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.130 [2024-12-12 10:44:20.022850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.130 [2024-12-12 10:44:20.038414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.130 [2024-12-12 10:44:20.038433] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.130 [2024-12-12 10:44:20.054080] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.130 [2024-12-12 10:44:20.054102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.130 [2024-12-12 10:44:20.066367] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.130 [2024-12-12 10:44:20.066387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.130 [2024-12-12 10:44:20.081507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.130 [2024-12-12 10:44:20.081525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.130 [2024-12-12 10:44:20.095608] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.130 [2024-12-12 10:44:20.095627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.130 [2024-12-12 10:44:20.111169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.130 [2024-12-12 10:44:20.111188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.130 [2024-12-12 10:44:20.126104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.130 [2024-12-12 10:44:20.126122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.130 [2024-12-12 10:44:20.138287] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.130 [2024-12-12 10:44:20.138304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.389 [2024-12-12 10:44:20.154042] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.389 [2024-12-12 10:44:20.154061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.389 [2024-12-12 10:44:20.166607] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.389 [2024-12-12 10:44:20.166625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.389 [2024-12-12 10:44:20.181697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.389 [2024-12-12 10:44:20.181715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.389 [2024-12-12 10:44:20.192411] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.389 [2024-12-12 10:44:20.192430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.389 [2024-12-12 10:44:20.207704] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.389 [2024-12-12 10:44:20.207722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.389 [2024-12-12 10:44:20.222070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.389 [2024-12-12 10:44:20.222087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.389 [2024-12-12 10:44:20.233144] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.389 [2024-12-12 10:44:20.233163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.389 [2024-12-12 10:44:20.248023] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.389 [2024-12-12 10:44:20.248041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.389 [2024-12-12 10:44:20.262888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.389 [2024-12-12 10:44:20.262907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.389 [2024-12-12 10:44:20.277871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.389 [2024-12-12 10:44:20.277890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.389 [2024-12-12 10:44:20.290370] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.389 [2024-12-12 10:44:20.290388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.389 [2024-12-12 10:44:20.303760] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.389 [2024-12-12 10:44:20.303779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.389 [2024-12-12 10:44:20.318674] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.389 [2024-12-12 10:44:20.318697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.389 [2024-12-12 10:44:20.334080] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.389 [2024-12-12 10:44:20.334098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.389 [2024-12-12 10:44:20.347844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.389 [2024-12-12 10:44:20.347862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.389 [2024-12-12 10:44:20.363125] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.389 [2024-12-12 10:44:20.363144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.389 [2024-12-12 10:44:20.378009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.389 [2024-12-12 10:44:20.378027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.389 [2024-12-12 10:44:20.391114] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.389 [2024-12-12 10:44:20.391132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.389 [2024-12-12 10:44:20.402155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.389 [2024-12-12 10:44:20.402173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.648 [2024-12-12 10:44:20.417590] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.648 [2024-12-12 10:44:20.417609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.648 [2024-12-12 10:44:20.430648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.648 [2024-12-12 10:44:20.430666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.648 [2024-12-12 10:44:20.443213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.648 [2024-12-12 10:44:20.443231] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.648 [2024-12-12 10:44:20.458812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.648 [2024-12-12 10:44:20.458831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.648 [2024-12-12 10:44:20.473503] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.648 [2024-12-12 10:44:20.473521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.648 [2024-12-12 10:44:20.484897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.648 [2024-12-12 10:44:20.484916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.648 [2024-12-12 10:44:20.500113] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.648 [2024-12-12 10:44:20.500131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.648 [2024-12-12 10:44:20.515311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.648 [2024-12-12 10:44:20.515330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.648 [2024-12-12 10:44:20.529930] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.648 [2024-12-12 10:44:20.529950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.648 [2024-12-12 10:44:20.541457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.648 [2024-12-12 10:44:20.541477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.648 [2024-12-12 10:44:20.556214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.648 [2024-12-12 10:44:20.556234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.648 [2024-12-12 10:44:20.570970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.648 [2024-12-12 10:44:20.570990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.648 [2024-12-12 10:44:20.585804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.648 [2024-12-12 10:44:20.585829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.648 [2024-12-12 10:44:20.598081] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.648 [2024-12-12 10:44:20.598101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.648 [2024-12-12 10:44:20.611950] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.648 [2024-12-12 10:44:20.611969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.648 [2024-12-12 10:44:20.626913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.648 [2024-12-12 10:44:20.626931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.648 16775.50 IOPS, 131.06 MiB/s [2024-12-12T09:44:20.671Z] [2024-12-12 10:44:20.641789] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.648 [2024-12-12 10:44:20.641807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.648 [2024-12-12 
10:44:20.652464] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.648 [2024-12-12 10:44:20.652482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.648 [2024-12-12 10:44:20.667724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.648 [2024-12-12 10:44:20.667743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.907 [2024-12-12 10:44:20.682416] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.907 [2024-12-12 10:44:20.682434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.907 [2024-12-12 10:44:20.693893] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.907 [2024-12-12 10:44:20.693911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.907 [2024-12-12 10:44:20.708103] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.907 [2024-12-12 10:44:20.708122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.907 [2024-12-12 10:44:20.723341] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.907 [2024-12-12 10:44:20.723360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.907 [2024-12-12 10:44:20.737712] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.907 [2024-12-12 10:44:20.737730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.907 [2024-12-12 10:44:20.750564] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.907 [2024-12-12 10:44:20.750588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.907 [2024-12-12 10:44:20.763300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.907 [2024-12-12 10:44:20.763320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.907 [2024-12-12 10:44:20.778052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.907 [2024-12-12 10:44:20.778071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.907 [2024-12-12 10:44:20.789207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.907 [2024-12-12 10:44:20.789226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.907 [2024-12-12 10:44:20.803469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.907 [2024-12-12 10:44:20.803488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.908 [2024-12-12 10:44:20.818170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.908 [2024-12-12 10:44:20.818189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.908 [2024-12-12 10:44:20.833731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.908 [2024-12-12 10:44:20.833750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.908 [2024-12-12 10:44:20.846993] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.908 [2024-12-12 10:44:20.847012] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:46.908 [2024-12-12 10:44:20.862514] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:46.908 [2024-12-12 10:44:20.862534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-message pair repeats with fresh timestamps every 10-16 ms, from 10:44:20.877 through 10:44:23.813, while the namespace-add loop runs; only the periodic throughput samples and the final latency summary are kept below ...]
00:30:47.685 16795.67 IOPS, 131.22 MiB/s [2024-12-12T09:44:21.708Z]
00:30:48.721 16828.50 IOPS, 131.47 MiB/s [2024-12-12T09:44:22.744Z]
00:30:49.758 16842.20 IOPS, 131.58 MiB/s [2024-12-12T09:44:23.781Z]
00:30:49.758 Latency(us)
00:30:49.758 [2024-12-12T09:44:23.781Z] Device Information : runtime(s)      IOPS    MiB/s  Fail/s  TO/s  Average      min       max
00:30:49.758 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:30:49.758 Nvme1n1            :       5.01  16844.34   131.60    0.00  0.00  7591.89  2075.31  13232.03
00:30:49.758 [2024-12-12T09:44:23.781Z] ===================================================================================================================
00:30:49.758 [2024-12-12T09:44:23.781Z] Total              :              16844.34   131.60    0.00  0.00  7591.89  2075.31  13232.03
00:30:50.017 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1731856) - No such process
00:30:50.017 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1731856
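The flood of paired errors above appears to be the test exercising a collision path rather than the run failing: while NSID 1 stays attached to nqn.2016-06.io.spdk:cnode1, the background loop keeps retrying nvmf_subsystem_add_ns with the same NSID, so each attempt is rejected in spdk_nvmf_subsystem_add_ns_ext and surfaced through the RPC layer. A minimal sketch of that collision against a running target follows; the malloc1 bdev name is hypothetical, created only for this sketch:

    # create a scratch bdev: 64 MiB, 512-byte blocks
    ./scripts/rpc.py bdev_malloc_create -b malloc1 64 512
    # NSID 1 is already claimed on cnode1, so this fails with
    # "Requested NSID 1 already in use" / "Unable to add namespace"
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc1 -n 1

Omitting -n would instead let the target assign the lowest free NSID, avoiding the collision.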
00:30:50.017 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:50.017 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:50.017 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:30:50.017 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:50.017 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:30:50.017 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:50.017 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:30:50.017 delay0
00:30:50.017 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:50.017 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:30:50.017 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:50.017 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:30:50.017 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:50.017 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:30:50.017 [2024-12-12 10:44:23.963722] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:30:58.133 Initializing NVMe Controllers
00:30:58.133 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:58.133 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:30:58.133 Initialization complete. Launching workers.
00:30:58.133 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 7002
00:30:58.133 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 7274, failed to submit 48
00:30:58.133 success 7110, unsuccessful 164, failed 0
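The two rpc_cmd calls above rebuild namespace 1 on top of a delay vbdev before the abort run: bdev_delay_create stacks delay0 on malloc0 with 1,000,000 us (1 s) average and p99 latencies for reads (-r/-t) and writes (-w/-n), so every I/O the abort example queues stays in flight long enough to be cancelled, which is why most of the 7,274 submitted aborts succeed in the summary above. Outside the harness (rpc_cmd is autotest's wrapper around scripts/rpc.py), the equivalent calls would look roughly like this sketch, which assumes a running target and an existing bdev named malloc0:

    # latencies are in microseconds
    ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1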
00:30:58.133 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:30:58.133 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:30:58.133 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:30:58.133 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:30:58.133 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:58.133 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:30:58.133 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:58.133 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:30:58.133 rmmod nvme_tcp
00:30:58.133 rmmod nvme_fabrics
00:30:58.133 rmmod nvme_keyring
00:30:58.133 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:58.133 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:30:58.133 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:30:58.133 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1730064 ']'
00:30:58.133 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1730064
00:30:58.133 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1730064 ']'
00:30:58.133 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1730064
00:30:58.133 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:30:58.133 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:58.133 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1730064
00:30:58.133 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:30:58.133 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:30:58.133 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1730064'
00:30:58.133 killing process with pid 1730064
00:30:58.133 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1730064
00:30:58.133 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1730064
00:30:58.133 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:30:58.133 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:30:58.133 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:59.511 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:30:59.511
00:30:59.511 real    0m32.229s
00:30:59.511 user    0m42.058s
00:30:59.511 sys     0m12.643s
00:30:59.511 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:59.511 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:30:59.511 ************************************
00:30:59.511 END TEST nvmf_zcopy
00:30:59.511 ************************************
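One detail worth noting in the teardown trace above: the iptr step restores the firewall by replaying the saved ruleset with the SPDK-tagged entries filtered out, i.e. roughly the following pipeline (a sketch of what the trace shows):

    # drop only the SPDK_NVMF-tagged rules; everything else is replayed as-is
    iptables-save | grep -v SPDK_NVMF | iptables-restore

so only the rules the test itself installed are removed, leaving the host's own firewall configuration intact.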
00:30:59.511 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:30:59.511 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:30:59.511 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:30:59.511 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:30:59.511 ************************************
00:30:59.511 START TEST nvmf_nmic
00:30:59.511 ************************************
00:30:59.511 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:30:59.771 * Looking for test storage...
00:30:59.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:30:59.771 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:30:59.771 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version
00:30:59.771 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:30:59.771 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:30:59.771 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:30:59.771 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l
00:30:59.771 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l
00:30:59.771 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-:
00:30:59.771 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1
00:30:59.771 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-:
00:30:59.771 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2
00:30:59.771 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<'
00:30:59.771 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2
00:30:59.771 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1
00:30:59.771 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:30:59.771 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in
00:30:59.771 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1
00:30:59.771 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 ))
00:30:59.771 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:30:59.771 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1
00:30:59.771 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1
00:30:59.771 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:30:59.771 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1
00:30:59.771 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1
00:30:59.771 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2
00:30:59.771 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2
00:30:59.771 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:30:59.771 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2
00:30:59.771 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2
00:30:59.771 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:30:59.771 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:30:59.771 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0
00:30:59.771 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:30:59.771 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 '
00:30:59.771 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 '
00:30:59.771 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 '
00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 '
00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 --
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:59.772 10:44:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:30:59.772 10:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:06.347 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:06.347 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:31:06.347 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:06.347 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:06.347 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:06.347 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:06.347 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:06.347 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:31:06.347 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:06.347 10:44:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:31:06.347 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:31:06.347 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:31:06.347 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:31:06.347 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:31:06.347 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:31:06.347 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:06.347 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:06.347 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:06.348 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:06.348 10:44:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:06.348 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:06.348 Found net devices under 0000:af:00.0: cvl_0_0 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:06.348 
10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:06.348 Found net devices under 0000:af:00.1: cvl_0_1 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
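For reference, the interface setup the framework just performed reduces to a handful of iproute2 commands. A minimal sketch of the same two-port loopback topology, assuming the e810 ports have already been renamed cvl_0_0 and cvl_0_1 by the test framework:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk               # target side lives in its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk  # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator IP, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP

The link bring-up, the SPDK-tagged iptables ACCEPT rule for port 4420, and the two ping checks recorded just below complete the setup.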
00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:06.348 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:06.348 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:31:06.348 00:31:06.348 --- 10.0.0.2 ping statistics --- 00:31:06.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:06.348 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:06.348 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:06.348 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:31:06.348 00:31:06.348 --- 10.0.0.1 ping statistics --- 00:31:06.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:06.348 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:06.348 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1737322 00:31:06.349 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:06.349 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1737322 00:31:06.349 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1737322 ']' 00:31:06.349 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:06.349 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:06.349 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:06.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:06.349 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:06.349 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:06.349 [2024-12-12 10:44:39.631102] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:06.349 [2024-12-12 10:44:39.632042] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:31:06.349 [2024-12-12 10:44:39.632079] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:06.349 [2024-12-12 10:44:39.711166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:06.349 [2024-12-12 10:44:39.755103] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:06.349 [2024-12-12 10:44:39.755138] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:06.349 [2024-12-12 10:44:39.755147] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:06.349 [2024-12-12 10:44:39.755153] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:06.349 [2024-12-12 10:44:39.755158] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:06.349 [2024-12-12 10:44:39.756452] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:06.349 [2024-12-12 10:44:39.756560] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:31:06.349 [2024-12-12 10:44:39.756670] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:31:06.349 [2024-12-12 10:44:39.756671] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:31:06.349 [2024-12-12 10:44:39.826083] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:06.349 [2024-12-12 10:44:39.826463] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:06.349 [2024-12-12 10:44:39.827051] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
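The target launch echoed above amounts to starting nvmf_tgt inside the target namespace and polling its RPC socket until it answers. A sketch, where $SPDK_ROOT is a placeholder for the workspace checkout (not a variable taken from the log):

  ip netns exec cvl_0_0_ns_spdk \
      "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  nvmfpid=$!
  # Poll until the app listens on the default RPC socket (/var/tmp/spdk.sock);
  # roughly what the framework's waitforlisten does.
  until "$SPDK_ROOT/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done

-m 0xF pins four reactors (the four "Reactor started on core N" notices above), and --interrupt-mode is what produces the "Set spdk_thread ... to intr mode" messages.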
00:31:06.349 [2024-12-12 10:44:39.827439] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:06.349 [2024-12-12 10:44:39.827478] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:06.349 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:06.349 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:31:06.349 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:06.349 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:06.349 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:06.349 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:06.349 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:06.349 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.349 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:06.349 [2024-12-12 10:44:39.901401] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:06.349 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.349 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:06.349 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.349 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:06.349 Malloc0 00:31:06.349 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.349 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:06.349 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.349 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:06.349 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.349 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:06.349 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.349 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:06.349 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.349 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
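Strung together, the provisioning steps echoed above map onto plain rpc.py calls. A sketch of the same sequence, with rpc.py standing in for the test's rpc_cmd wrapper:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB bdev, 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Test case1 below then creates a second subsystem (cnode2) and tries to add the same Malloc0 to it; because the first add_ns claimed the bdev exclusive_write, the second attempt is expected to fail with the -32602 "Invalid parameters" JSON-RPC error.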
00:31:06.349 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.349 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:06.349 [2024-12-12 10:44:39.977707] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:06.349 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.349 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:31:06.349 test case1: single bdev can't be used in multiple subsystems 00:31:06.349 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:31:06.349 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.349 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:06.349 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.349 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:06.349 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.349 10:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:06.349 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.349 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:31:06.349 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:31:06.349 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.349 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:06.349 [2024-12-12 10:44:40.009168] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:31:06.349 [2024-12-12 10:44:40.009190] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:31:06.349 [2024-12-12 10:44:40.009199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.349 request: 00:31:06.349 { 00:31:06.349 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:31:06.349 "namespace": { 00:31:06.349 "bdev_name": "Malloc0", 00:31:06.349 "no_auto_visible": false, 00:31:06.349 "hide_metadata": false 00:31:06.349 }, 00:31:06.349 "method": "nvmf_subsystem_add_ns", 00:31:06.349 "req_id": 1 00:31:06.349 } 00:31:06.349 Got JSON-RPC error response 00:31:06.349 response: 00:31:06.349 { 00:31:06.349 "code": -32602, 00:31:06.349 "message": "Invalid parameters" 00:31:06.349 } 00:31:06.349 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:06.349 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:31:06.349 10:44:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:31:06.349 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:31:06.349 Adding namespace failed - expected result. 00:31:06.349 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:31:06.349 test case2: host connect to nvmf target in multiple paths 00:31:06.349 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:06.349 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.349 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:06.349 [2024-12-12 10:44:40.021244] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:06.349 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.349 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:06.349 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:31:06.608 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:31:06.608 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:31:06.608 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:06.608 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:31:06.608 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:31:08.511 10:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:08.511 10:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:08.511 10:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:08.511 10:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:31:08.511 10:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:08.511 10:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:31:08.511 10:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:08.511 [global] 00:31:08.511 thread=1 00:31:08.511 invalidate=1 
00:31:08.511 rw=write 00:31:08.511 time_based=1 00:31:08.511 runtime=1 00:31:08.511 ioengine=libaio 00:31:08.511 direct=1 00:31:08.511 bs=4096 00:31:08.511 iodepth=1 00:31:08.511 norandommap=0 00:31:08.511 numjobs=1 00:31:08.511 00:31:08.511 verify_dump=1 00:31:08.511 verify_backlog=512 00:31:08.511 verify_state_save=0 00:31:08.511 do_verify=1 00:31:08.511 verify=crc32c-intel 00:31:08.511 [job0] 00:31:08.511 filename=/dev/nvme0n1 00:31:08.511 Could not set queue depth (nvme0n1) 00:31:08.770 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:08.770 fio-3.35 00:31:08.770 Starting 1 thread 00:31:10.147 00:31:10.147 job0: (groupid=0, jobs=1): err= 0: pid=1737927: Thu Dec 12 10:44:43 2024 00:31:10.147 read: IOPS=2045, BW=8183KiB/s (8380kB/s)(8208KiB/1003msec) 00:31:10.147 slat (nsec): min=6336, max=28502, avg=7429.24, stdev=1175.22 00:31:10.147 clat (usec): min=173, max=41166, avg=280.73, stdev=1797.84 00:31:10.147 lat (usec): min=180, max=41179, avg=288.15, stdev=1798.13 00:31:10.147 clat percentiles (usec): 00:31:10.147 | 1.00th=[ 178], 5.00th=[ 180], 10.00th=[ 182], 20.00th=[ 182], 00:31:10.147 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 186], 60.00th=[ 190], 00:31:10.147 | 70.00th=[ 196], 80.00th=[ 245], 90.00th=[ 249], 95.00th=[ 251], 00:31:10.147 | 99.00th=[ 260], 99.50th=[ 262], 99.90th=[41157], 99.95th=[41157], 00:31:10.147 | 99.99th=[41157] 00:31:10.147 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:31:10.147 slat (usec): min=9, max=27450, avg=21.75, stdev=542.33 00:31:10.147 clat (usec): min=116, max=368, avg=134.55, stdev= 9.97 00:31:10.147 lat (usec): min=132, max=27750, avg=156.31, stdev=545.69 00:31:10.147 clat percentiles (usec): 00:31:10.147 | 1.00th=[ 126], 5.00th=[ 128], 10.00th=[ 129], 20.00th=[ 130], 00:31:10.147 | 30.00th=[ 131], 40.00th=[ 133], 50.00th=[ 133], 60.00th=[ 135], 00:31:10.147 | 70.00th=[ 137], 80.00th=[ 139], 90.00th=[ 143], 95.00th=[ 147], 00:31:10.147 | 99.00th=[ 178], 99.50th=[ 182], 99.90th=[ 241], 99.95th=[ 302], 00:31:10.147 | 99.99th=[ 367] 00:31:10.147 bw ( KiB/s): min= 8192, max=12288, per=100.00%, avg=10240.00, stdev=2896.31, samples=2 00:31:10.147 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:31:10.147 lat (usec) : 250=96.62%, 500=3.30% 00:31:10.147 lat (msec) : 50=0.09% 00:31:10.147 cpu : usr=2.99%, sys=4.19%, ctx=4616, majf=0, minf=1 00:31:10.147 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:10.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.147 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.147 issued rwts: total=2052,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.147 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:10.147 00:31:10.147 Run status group 0 (all jobs): 00:31:10.147 READ: bw=8183KiB/s (8380kB/s), 8183KiB/s-8183KiB/s (8380kB/s-8380kB/s), io=8208KiB (8405kB), run=1003-1003msec 00:31:10.147 WRITE: bw=9.97MiB/s (10.5MB/s), 9.97MiB/s-9.97MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1003-1003msec 00:31:10.147 00:31:10.147 Disk stats (read/write): 00:31:10.147 nvme0n1: ios=2076/2560, merge=0/0, ticks=1422/343, in_queue=1765, util=98.40% 00:31:10.147 10:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:10.147 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:31:10.147 10:44:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:10.147 10:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:31:10.147 10:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:10.147 10:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:10.147 10:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:10.147 10:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:10.147 10:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:31:10.147 10:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:10.147 10:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:31:10.147 10:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:10.147 10:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:31:10.147 10:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:10.147 10:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:31:10.147 10:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:10.147 10:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:10.147 rmmod nvme_tcp 00:31:10.147 rmmod nvme_fabrics 00:31:10.147 rmmod nvme_keyring 00:31:10.147 10:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:10.147 10:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:31:10.147 10:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:31:10.147 10:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1737322 ']' 00:31:10.147 10:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1737322 00:31:10.147 10:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1737322 ']' 00:31:10.147 10:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1737322 00:31:10.147 10:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:31:10.407 10:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:10.407 10:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1737322 00:31:10.407 10:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:10.407 10:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:10.407 10:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 1737322' 00:31:10.407 killing process with pid 1737322 00:31:10.407 10:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1737322 00:31:10.407 10:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1737322 00:31:10.407 10:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:10.407 10:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:10.407 10:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:10.407 10:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:31:10.407 10:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:31:10.407 10:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:10.407 10:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:31:10.407 10:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:10.407 10:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:10.407 10:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:10.407 10:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:10.407 10:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:13.012 00:31:13.012 real 0m12.987s 00:31:13.012 user 0m24.330s 00:31:13.012 sys 0m5.974s 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:13.012 ************************************ 00:31:13.012 END TEST nvmf_nmic 00:31:13.012 ************************************ 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:13.012 ************************************ 00:31:13.012 START TEST nvmf_fio_target 00:31:13.012 ************************************ 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:13.012 * Looking for test storage... 
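The nmic teardown recorded above mirrors the setup: disconnect the host (both paths drop at once, hence "disconnected 2 controller(s)"), kill the target, unload the host-side NVMe modules, strip only the SPDK-tagged firewall rules, and dismantle the namespace. A sketch, where the netns deletion is an assumption about what _remove_spdk_ns does (its body is not shown in the log):

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  kill $nvmfpid && wait $nvmfpid                        # killprocess equivalent
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop SPDK comment-tagged rules only
  ip netns delete cvl_0_0_ns_spdk                       # assumed _remove_spdk_ns behavior
  ip -4 addr flush cvl_0_1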
00:31:13.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:13.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:13.012 --rc genhtml_branch_coverage=1 00:31:13.012 --rc genhtml_function_coverage=1 00:31:13.012 --rc genhtml_legend=1 00:31:13.012 --rc geninfo_all_blocks=1 00:31:13.012 --rc geninfo_unexecuted_blocks=1 00:31:13.012 00:31:13.012 ' 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:13.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:13.012 --rc genhtml_branch_coverage=1 00:31:13.012 --rc genhtml_function_coverage=1 00:31:13.012 --rc genhtml_legend=1 00:31:13.012 --rc geninfo_all_blocks=1 00:31:13.012 --rc geninfo_unexecuted_blocks=1 00:31:13.012 00:31:13.012 ' 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:13.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:13.012 --rc genhtml_branch_coverage=1 00:31:13.012 --rc genhtml_function_coverage=1 00:31:13.012 --rc genhtml_legend=1 00:31:13.012 --rc geninfo_all_blocks=1 00:31:13.012 --rc geninfo_unexecuted_blocks=1 00:31:13.012 00:31:13.012 ' 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:13.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:13.012 --rc genhtml_branch_coverage=1 00:31:13.012 --rc genhtml_function_coverage=1 00:31:13.012 --rc genhtml_legend=1 00:31:13.012 --rc geninfo_all_blocks=1 00:31:13.012 --rc geninfo_unexecuted_blocks=1 00:31:13.012 
00:31:13.012 ' 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:13.012 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:13.013 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.013 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.013 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.013 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:31:13.013 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.013 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:31:13.013 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:13.013 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:13.013 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:13.013 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:13.013 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:31:13.013 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:13.013 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:13.013 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:13.013 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:13.013 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:13.013 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:13.013 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:13.013 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:13.013 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:31:13.013 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:13.013 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:13.013 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:13.013 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:13.013 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:13.013 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:13.013 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:13.013 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:13.013 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:13.013 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:13.013 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:31:13.013 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:18.285 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:18.285 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:31:18.285 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:18.285 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:18.285 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:18.285 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:18.285 10:44:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:18.285 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:31:18.285 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:18.285 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:31:18.286 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:31:18.286 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:31:18.286 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:31:18.286 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:31:18.286 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:31:18.286 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:18.286 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:18.286 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:18.286 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:18.286 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:18.286 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:18.286 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:18.286 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:18.286 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:18.286 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:18.286 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:18.286 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:18.286 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:18.286 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:18.286 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:18.286 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:18.286 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:18.286 10:44:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:18.286 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:18.286 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:18.286 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:18.286 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:18.286 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:18.286 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:18.286 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:18.286 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:18.286 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:18.286 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:18.286 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:18.286 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:18.286 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:18.286 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:18.286 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:18.546 Found net 
devices under 0000:af:00.0: cvl_0_0 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:18.546 Found net devices under 0000:af:00.1: cvl_0_1 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:18.546 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:18.546 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:31:18.546 00:31:18.546 --- 10.0.0.2 ping statistics --- 00:31:18.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:18.546 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:18.546 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:18.546 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:31:18.546 00:31:18.546 --- 10.0.0.1 ping statistics --- 00:31:18.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:18.546 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:18.546 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:18.805 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:31:18.805 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:18.805 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:18.805 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:18.805 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1741619 00:31:18.805 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1741619 00:31:18.805 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:18.805 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1741619 ']' 00:31:18.805 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:18.805 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:18.805 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:18.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
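For reference, everything nvmftestinit and nvmfappstart traced above condenses to roughly the following shell; this is a minimal sketch with the interface names, addresses, and flags taken from this run (the two E810 ports already exist as cvl_0_0/cvl_0_1; the address flushes, the comment-tagging of the iptables rule, and error handling are omitted), not the exact common.sh helpers:

  # put the target-side E810 port in its own network namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                       # initiator -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # target -> initiator
  # start the target inside the namespace and wait for its RPC socket
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &

Isolating the target port in its own namespace lets the kernel NVMe initiator and the SPDK target share one host while still driving traffic over a real link between the two ports.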
00:31:18.805 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:18.805 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:18.805 [2024-12-12 10:44:52.652210] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:18.805 [2024-12-12 10:44:52.653146] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:31:18.805 [2024-12-12 10:44:52.653183] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:18.805 [2024-12-12 10:44:52.731670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:18.805 [2024-12-12 10:44:52.773728] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:18.805 [2024-12-12 10:44:52.773763] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:18.805 [2024-12-12 10:44:52.773770] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:18.805 [2024-12-12 10:44:52.773779] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:18.805 [2024-12-12 10:44:52.773784] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:18.805 [2024-12-12 10:44:52.775245] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:18.805 [2024-12-12 10:44:52.775353] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:31:18.805 [2024-12-12 10:44:52.775462] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:31:18.805 [2024-12-12 10:44:52.775464] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:31:19.063 [2024-12-12 10:44:52.845437] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:19.063 [2024-12-12 10:44:52.846292] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:19.063 [2024-12-12 10:44:52.846456] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:19.063 [2024-12-12 10:44:52.846978] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:19.063 [2024-12-12 10:44:52.847029] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
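Before the fio runs, target/fio.sh builds the subsystem over RPC; condensed, the calls traced below amount to the following sketch (rpc.py path shortened, repeated bdev_malloc_create calls collapsed, bdev names as the log assigns them):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512        # called twice -> Malloc0, Malloc1 (plain namespaces)
  rpc.py bdev_malloc_create 64 512        # called twice -> Malloc2, Malloc3
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  rpc.py bdev_malloc_create 64 512        # called three times -> Malloc4, Malloc5, Malloc6
  rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
  nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

Four namespaces (Malloc0, Malloc1, raid0, concat0) surface on the initiator as /dev/nvme0n1 through /dev/nvme0n4, which is why waitforserial greps lsblk output for SPDKISFASTANDAWESOME until it counts 4 devices, and why each of the four fio job files targets one of those block devices.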
00:31:19.063 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:19.063 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:31:19.063 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:19.063 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:19.063 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:19.063 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:19.063 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:19.321 [2024-12-12 10:44:53.088124] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:19.321 10:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:19.580 10:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:31:19.580 10:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:19.580 10:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:31:19.580 10:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:19.839 10:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:31:19.839 10:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:20.097 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:31:20.097 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:31:20.355 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:20.613 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:31:20.613 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:20.613 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:31:20.613 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:20.871 10:44:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:31:20.871 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:31:21.129 10:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:21.386 10:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:21.386 10:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:21.386 10:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:21.386 10:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:21.643 10:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:21.900 [2024-12-12 10:44:55.716030] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:21.900 10:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:31:22.158 10:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:31:22.158 10:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:22.415 10:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:31:22.415 10:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:31:22.415 10:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:22.415 10:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:31:22.415 10:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:31:22.415 10:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:31:24.940 10:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:24.940 10:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:31:24.940 10:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:24.940 10:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:31:24.940 10:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:24.940 10:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:31:24.940 10:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:24.940 [global] 00:31:24.940 thread=1 00:31:24.940 invalidate=1 00:31:24.940 rw=write 00:31:24.940 time_based=1 00:31:24.940 runtime=1 00:31:24.940 ioengine=libaio 00:31:24.940 direct=1 00:31:24.940 bs=4096 00:31:24.940 iodepth=1 00:31:24.940 norandommap=0 00:31:24.940 numjobs=1 00:31:24.940 00:31:24.940 verify_dump=1 00:31:24.940 verify_backlog=512 00:31:24.940 verify_state_save=0 00:31:24.940 do_verify=1 00:31:24.940 verify=crc32c-intel 00:31:24.940 [job0] 00:31:24.940 filename=/dev/nvme0n1 00:31:24.940 [job1] 00:31:24.940 filename=/dev/nvme0n2 00:31:24.940 [job2] 00:31:24.940 filename=/dev/nvme0n3 00:31:24.940 [job3] 00:31:24.940 filename=/dev/nvme0n4 00:31:24.940 Could not set queue depth (nvme0n1) 00:31:24.940 Could not set queue depth (nvme0n2) 00:31:24.940 Could not set queue depth (nvme0n3) 00:31:24.940 Could not set queue depth (nvme0n4) 00:31:24.940 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:24.940 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:24.940 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:24.940 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:24.940 fio-3.35 00:31:24.940 Starting 4 threads 00:31:26.312 00:31:26.312 job0: (groupid=0, jobs=1): err= 0: pid=1742716: Thu Dec 12 10:44:59 2024 00:31:26.312 read: IOPS=21, BW=87.4KiB/s (89.5kB/s)(88.0KiB/1007msec) 00:31:26.312 slat (nsec): min=10013, max=26114, avg=23902.05, stdev=3135.12 00:31:26.312 clat (usec): min=40590, max=41101, avg=40954.19, stdev=104.07 00:31:26.312 lat (usec): min=40600, max=41125, avg=40978.10, stdev=106.46 00:31:26.312 clat percentiles (usec): 00:31:26.312 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:26.312 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:26.312 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:26.312 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:26.312 | 99.99th=[41157] 00:31:26.312 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:31:26.312 slat (nsec): min=10540, max=49670, avg=11890.37, stdev=2124.56 00:31:26.312 clat (usec): min=142, max=325, avg=184.67, stdev=13.84 00:31:26.312 lat (usec): min=157, max=338, avg=196.56, stdev=14.23 00:31:26.312 clat percentiles (usec): 00:31:26.312 | 1.00th=[ 153], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 178], 00:31:26.312 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 188], 00:31:26.312 | 70.00th=[ 190], 80.00th=[ 192], 90.00th=[ 196], 95.00th=[ 202], 00:31:26.312 
| 99.00th=[ 219], 99.50th=[ 251], 99.90th=[ 326], 99.95th=[ 326], 00:31:26.312 | 99.99th=[ 326] 00:31:26.312 bw ( KiB/s): min= 4096, max= 4096, per=34.70%, avg=4096.00, stdev= 0.00, samples=1 00:31:26.312 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:26.312 lat (usec) : 250=95.32%, 500=0.56% 00:31:26.312 lat (msec) : 50=4.12% 00:31:26.312 cpu : usr=0.20%, sys=1.09%, ctx=536, majf=0, minf=1 00:31:26.312 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:26.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.312 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.312 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.312 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:26.312 job1: (groupid=0, jobs=1): err= 0: pid=1742717: Thu Dec 12 10:44:59 2024 00:31:26.312 read: IOPS=21, BW=87.4KiB/s (89.5kB/s)(88.0KiB/1007msec) 00:31:26.312 slat (nsec): min=9454, max=23738, avg=22567.82, stdev=2942.51 00:31:26.312 clat (usec): min=40594, max=41061, avg=40953.54, stdev=90.09 00:31:26.312 lat (usec): min=40604, max=41085, avg=40976.11, stdev=92.75 00:31:26.312 clat percentiles (usec): 00:31:26.312 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:31:26.312 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:26.312 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:26.312 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:26.312 | 99.99th=[41157] 00:31:26.312 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:31:26.312 slat (nsec): min=9492, max=37475, avg=10842.08, stdev=1943.79 00:31:26.312 clat (usec): min=143, max=427, avg=186.37, stdev=17.38 00:31:26.312 lat (usec): min=157, max=464, avg=197.22, stdev=18.54 00:31:26.312 clat percentiles (usec): 00:31:26.312 | 1.00th=[ 155], 5.00th=[ 163], 10.00th=[ 172], 20.00th=[ 180], 00:31:26.312 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 190], 00:31:26.312 | 70.00th=[ 192], 80.00th=[ 194], 90.00th=[ 198], 95.00th=[ 200], 00:31:26.312 | 99.00th=[ 225], 99.50th=[ 241], 99.90th=[ 429], 99.95th=[ 429], 00:31:26.312 | 99.99th=[ 429] 00:31:26.312 bw ( KiB/s): min= 4096, max= 4096, per=34.70%, avg=4096.00, stdev= 0.00, samples=1 00:31:26.312 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:26.312 lat (usec) : 250=95.51%, 500=0.37% 00:31:26.312 lat (msec) : 50=4.12% 00:31:26.312 cpu : usr=0.20%, sys=0.60%, ctx=535, majf=0, minf=1 00:31:26.312 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:26.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.312 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.312 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.312 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:26.312 job2: (groupid=0, jobs=1): err= 0: pid=1742719: Thu Dec 12 10:44:59 2024 00:31:26.312 read: IOPS=1178, BW=4715KiB/s (4828kB/s)(4908KiB/1041msec) 00:31:26.312 slat (nsec): min=7326, max=41330, avg=8448.02, stdev=2156.50 00:31:26.312 clat (usec): min=179, max=41017, avg=612.63, stdev=4011.14 00:31:26.312 lat (usec): min=199, max=41040, avg=621.08, stdev=4012.50 00:31:26.312 clat percentiles (usec): 00:31:26.312 | 1.00th=[ 194], 5.00th=[ 198], 10.00th=[ 200], 20.00th=[ 204], 00:31:26.312 | 30.00th=[ 206], 40.00th=[ 208], 50.00th=[ 210], 
60.00th=[ 212], 00:31:26.312 | 70.00th=[ 215], 80.00th=[ 221], 90.00th=[ 245], 95.00th=[ 249], 00:31:26.312 | 99.00th=[ 338], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:26.312 | 99.99th=[41157] 00:31:26.312 write: IOPS=1475, BW=5902KiB/s (6044kB/s)(6144KiB/1041msec); 0 zone resets 00:31:26.312 slat (nsec): min=10760, max=40352, avg=12179.35, stdev=1905.55 00:31:26.312 clat (usec): min=119, max=318, avg=161.76, stdev=33.55 00:31:26.312 lat (usec): min=140, max=329, avg=173.94, stdev=34.10 00:31:26.312 clat percentiles (usec): 00:31:26.312 | 1.00th=[ 135], 5.00th=[ 137], 10.00th=[ 137], 20.00th=[ 139], 00:31:26.312 | 30.00th=[ 141], 40.00th=[ 141], 50.00th=[ 143], 60.00th=[ 147], 00:31:26.312 | 70.00th=[ 169], 80.00th=[ 198], 90.00th=[ 210], 95.00th=[ 239], 00:31:26.312 | 99.00th=[ 253], 99.50th=[ 265], 99.90th=[ 293], 99.95th=[ 318], 00:31:26.312 | 99.99th=[ 318] 00:31:26.312 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:31:26.312 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:31:26.312 lat (usec) : 250=97.43%, 500=2.14% 00:31:26.312 lat (msec) : 50=0.43% 00:31:26.312 cpu : usr=2.60%, sys=3.94%, ctx=2764, majf=0, minf=1 00:31:26.312 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:26.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.312 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.312 issued rwts: total=1227,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.312 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:26.312 job3: (groupid=0, jobs=1): err= 0: pid=1742720: Thu Dec 12 10:44:59 2024 00:31:26.312 read: IOPS=22, BW=90.8KiB/s (93.0kB/s)(92.0KiB/1013msec) 00:31:26.312 slat (nsec): min=7829, max=23785, avg=21695.30, stdev=4065.23 00:31:26.312 clat (usec): min=264, max=41141, avg=39211.07, stdev=8490.34 00:31:26.312 lat (usec): min=274, max=41164, avg=39232.76, stdev=8492.87 00:31:26.312 clat percentiles (usec): 00:31:26.312 | 1.00th=[ 265], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:31:26.312 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:26.312 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:26.312 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:26.312 | 99.99th=[41157] 00:31:26.312 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:31:26.312 slat (nsec): min=9563, max=45100, avg=10843.59, stdev=2113.70 00:31:26.312 clat (usec): min=137, max=350, avg=202.41, stdev=25.89 00:31:26.312 lat (usec): min=147, max=395, avg=213.26, stdev=26.33 00:31:26.312 clat percentiles (usec): 00:31:26.312 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 167], 20.00th=[ 192], 00:31:26.312 | 30.00th=[ 198], 40.00th=[ 200], 50.00th=[ 202], 60.00th=[ 206], 00:31:26.312 | 70.00th=[ 210], 80.00th=[ 217], 90.00th=[ 235], 95.00th=[ 245], 00:31:26.312 | 99.00th=[ 258], 99.50th=[ 318], 99.90th=[ 351], 99.95th=[ 351], 00:31:26.312 | 99.99th=[ 351] 00:31:26.312 bw ( KiB/s): min= 4096, max= 4096, per=34.70%, avg=4096.00, stdev= 0.00, samples=1 00:31:26.312 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:26.312 lat (usec) : 250=92.90%, 500=2.99% 00:31:26.312 lat (msec) : 50=4.11% 00:31:26.312 cpu : usr=0.30%, sys=0.49%, ctx=535, majf=0, minf=2 00:31:26.312 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:26.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:31:26.312 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.312 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.312 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:26.312 00:31:26.312 Run status group 0 (all jobs): 00:31:26.312 READ: bw=4972KiB/s (5091kB/s), 87.4KiB/s-4715KiB/s (89.5kB/s-4828kB/s), io=5176KiB (5300kB), run=1007-1041msec 00:31:26.312 WRITE: bw=11.5MiB/s (12.1MB/s), 2022KiB/s-5902KiB/s (2070kB/s-6044kB/s), io=12.0MiB (12.6MB), run=1007-1041msec 00:31:26.312 00:31:26.312 Disk stats (read/write): 00:31:26.312 nvme0n1: ios=44/512, merge=0/0, ticks=1723/95, in_queue=1818, util=97.60% 00:31:26.312 nvme0n2: ios=42/512, merge=0/0, ticks=1723/95, in_queue=1818, util=97.86% 00:31:26.312 nvme0n3: ios=1280/1536, merge=0/0, ticks=1247/230, in_queue=1477, util=97.70% 00:31:26.312 nvme0n4: ios=19/512, merge=0/0, ticks=739/95, in_queue=834, util=89.68% 00:31:26.312 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:31:26.312 [global] 00:31:26.312 thread=1 00:31:26.312 invalidate=1 00:31:26.312 rw=randwrite 00:31:26.312 time_based=1 00:31:26.312 runtime=1 00:31:26.312 ioengine=libaio 00:31:26.312 direct=1 00:31:26.313 bs=4096 00:31:26.313 iodepth=1 00:31:26.313 norandommap=0 00:31:26.313 numjobs=1 00:31:26.313 00:31:26.313 verify_dump=1 00:31:26.313 verify_backlog=512 00:31:26.313 verify_state_save=0 00:31:26.313 do_verify=1 00:31:26.313 verify=crc32c-intel 00:31:26.313 [job0] 00:31:26.313 filename=/dev/nvme0n1 00:31:26.313 [job1] 00:31:26.313 filename=/dev/nvme0n2 00:31:26.313 [job2] 00:31:26.313 filename=/dev/nvme0n3 00:31:26.313 [job3] 00:31:26.313 filename=/dev/nvme0n4 00:31:26.313 Could not set queue depth (nvme0n1) 00:31:26.313 Could not set queue depth (nvme0n2) 00:31:26.313 Could not set queue depth (nvme0n3) 00:31:26.313 Could not set queue depth (nvme0n4) 00:31:26.313 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:26.313 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:26.313 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:26.313 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:26.313 fio-3.35 00:31:26.313 Starting 4 threads 00:31:27.684 00:31:27.684 job0: (groupid=0, jobs=1): err= 0: pid=1743106: Thu Dec 12 10:45:01 2024 00:31:27.684 read: IOPS=316, BW=1268KiB/s (1298kB/s)(1284KiB/1013msec) 00:31:27.684 slat (nsec): min=6421, max=26894, avg=8468.91, stdev=3882.09 00:31:27.684 clat (usec): min=182, max=41987, avg=2766.71, stdev=9888.86 00:31:27.684 lat (usec): min=190, max=41996, avg=2775.17, stdev=9891.75 00:31:27.684 clat percentiles (usec): 00:31:27.684 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 210], 00:31:27.684 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 221], 60.00th=[ 225], 00:31:27.684 | 70.00th=[ 227], 80.00th=[ 233], 90.00th=[ 251], 95.00th=[41157], 00:31:27.684 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:31:27.684 | 99.99th=[42206] 00:31:27.684 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:31:27.684 slat (nsec): min=9618, max=44203, avg=10854.14, stdev=1933.65 00:31:27.684 clat (usec): min=125, 
max=329, avg=210.52, stdev=24.64 00:31:27.684 lat (usec): min=135, max=339, avg=221.38, stdev=24.75 00:31:27.684 clat percentiles (usec): 00:31:27.684 | 1.00th=[ 137], 5.00th=[ 165], 10.00th=[ 176], 20.00th=[ 198], 00:31:27.684 | 30.00th=[ 206], 40.00th=[ 210], 50.00th=[ 215], 60.00th=[ 219], 00:31:27.685 | 70.00th=[ 225], 80.00th=[ 229], 90.00th=[ 237], 95.00th=[ 241], 00:31:27.685 | 99.00th=[ 255], 99.50th=[ 260], 99.90th=[ 330], 99.95th=[ 330], 00:31:27.685 | 99.99th=[ 330] 00:31:27.685 bw ( KiB/s): min= 4096, max= 4096, per=29.60%, avg=4096.00, stdev= 0.00, samples=1 00:31:27.685 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:27.685 lat (usec) : 250=94.84%, 500=2.76% 00:31:27.685 lat (msec) : 50=2.40% 00:31:27.685 cpu : usr=0.49%, sys=0.79%, ctx=835, majf=0, minf=1 00:31:27.685 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:27.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.685 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.685 issued rwts: total=321,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:27.685 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:27.685 job1: (groupid=0, jobs=1): err= 0: pid=1743107: Thu Dec 12 10:45:01 2024 00:31:27.685 read: IOPS=580, BW=2322KiB/s (2377kB/s)(2324KiB/1001msec) 00:31:27.685 slat (nsec): min=6602, max=26304, avg=9254.19, stdev=2480.14 00:31:27.685 clat (usec): min=186, max=43055, avg=1370.34, stdev=6698.35 00:31:27.685 lat (usec): min=193, max=43069, avg=1379.60, stdev=6700.25 00:31:27.685 clat percentiles (usec): 00:31:27.685 | 1.00th=[ 219], 5.00th=[ 227], 10.00th=[ 235], 20.00th=[ 239], 00:31:27.685 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 245], 60.00th=[ 247], 00:31:27.685 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 253], 95.00th=[ 260], 00:31:27.685 | 99.00th=[41157], 99.50th=[41157], 99.90th=[43254], 99.95th=[43254], 00:31:27.685 | 99.99th=[43254] 00:31:27.685 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:31:27.685 slat (usec): min=9, max=250, avg=12.16, stdev= 7.75 00:31:27.685 clat (usec): min=117, max=315, avg=174.36, stdev=42.34 00:31:27.685 lat (usec): min=132, max=400, avg=186.52, stdev=43.50 00:31:27.685 clat percentiles (usec): 00:31:27.685 | 1.00th=[ 125], 5.00th=[ 129], 10.00th=[ 131], 20.00th=[ 133], 00:31:27.685 | 30.00th=[ 137], 40.00th=[ 141], 50.00th=[ 163], 60.00th=[ 200], 00:31:27.685 | 70.00th=[ 210], 80.00th=[ 219], 90.00th=[ 229], 95.00th=[ 239], 00:31:27.685 | 99.00th=[ 260], 99.50th=[ 293], 99.90th=[ 306], 99.95th=[ 318], 00:31:27.685 | 99.99th=[ 318] 00:31:27.685 bw ( KiB/s): min= 4096, max= 4096, per=29.60%, avg=4096.00, stdev= 0.00, samples=1 00:31:27.685 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:27.685 lat (usec) : 250=91.84%, 500=7.17% 00:31:27.685 lat (msec) : 50=1.00% 00:31:27.685 cpu : usr=1.00%, sys=2.70%, ctx=1607, majf=0, minf=1 00:31:27.685 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:27.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.685 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.685 issued rwts: total=581,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:27.685 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:27.685 job2: (groupid=0, jobs=1): err= 0: pid=1743114: Thu Dec 12 10:45:01 2024 00:31:27.685 read: IOPS=1411, BW=5645KiB/s (5780kB/s)(5848KiB/1036msec) 00:31:27.685 slat (nsec): 
min=8505, max=43664, avg=9959.96, stdev=1926.63 00:31:27.685 clat (usec): min=173, max=41099, avg=513.35, stdev=3358.39 00:31:27.685 lat (usec): min=193, max=41115, avg=523.31, stdev=3359.19 00:31:27.685 clat percentiles (usec): 00:31:27.685 | 1.00th=[ 190], 5.00th=[ 194], 10.00th=[ 196], 20.00th=[ 206], 00:31:27.685 | 30.00th=[ 233], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 243], 00:31:27.685 | 70.00th=[ 245], 80.00th=[ 247], 90.00th=[ 251], 95.00th=[ 255], 00:31:27.685 | 99.00th=[ 445], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:27.685 | 99.99th=[41157] 00:31:27.685 write: IOPS=1482, BW=5931KiB/s (6073kB/s)(6144KiB/1036msec); 0 zone resets 00:31:27.685 slat (nsec): min=11678, max=49024, avg=13314.84, stdev=2507.75 00:31:27.685 clat (usec): min=124, max=347, avg=152.30, stdev=19.52 00:31:27.685 lat (usec): min=137, max=396, avg=165.61, stdev=20.33 00:31:27.685 clat percentiles (usec): 00:31:27.685 | 1.00th=[ 130], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 137], 00:31:27.685 | 30.00th=[ 139], 40.00th=[ 141], 50.00th=[ 143], 60.00th=[ 147], 00:31:27.685 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 180], 95.00th=[ 184], 00:31:27.685 | 99.00th=[ 194], 99.50th=[ 208], 99.90th=[ 249], 99.95th=[ 347], 00:31:27.685 | 99.99th=[ 347] 00:31:27.685 bw ( KiB/s): min= 4096, max= 8192, per=44.40%, avg=6144.00, stdev=2896.31, samples=2 00:31:27.685 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:31:27.685 lat (usec) : 250=93.70%, 500=5.87%, 750=0.07% 00:31:27.685 lat (msec) : 2=0.03%, 50=0.33% 00:31:27.685 cpu : usr=2.61%, sys=5.02%, ctx=2999, majf=0, minf=1 00:31:27.685 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:27.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.685 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.685 issued rwts: total=1462,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:27.685 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:27.685 job3: (groupid=0, jobs=1): err= 0: pid=1743115: Thu Dec 12 10:45:01 2024 00:31:27.685 read: IOPS=22, BW=89.6KiB/s (91.7kB/s)(92.0KiB/1027msec) 00:31:27.685 slat (nsec): min=9341, max=23987, avg=21870.91, stdev=4129.60 00:31:27.685 clat (usec): min=246, max=41965, avg=39314.50, stdev=8523.02 00:31:27.685 lat (usec): min=269, max=41988, avg=39336.37, stdev=8522.87 00:31:27.685 clat percentiles (usec): 00:31:27.685 | 1.00th=[ 247], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:31:27.685 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:27.685 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:31:27.685 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:27.685 | 99.99th=[42206] 00:31:27.685 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:31:27.685 slat (nsec): min=9611, max=43857, avg=10602.42, stdev=1820.84 00:31:27.685 clat (usec): min=131, max=399, avg=211.25, stdev=25.49 00:31:27.685 lat (usec): min=141, max=409, avg=221.85, stdev=25.70 00:31:27.685 clat percentiles (usec): 00:31:27.685 | 1.00th=[ 135], 5.00th=[ 167], 10.00th=[ 176], 20.00th=[ 200], 00:31:27.685 | 30.00th=[ 206], 40.00th=[ 212], 50.00th=[ 215], 60.00th=[ 219], 00:31:27.685 | 70.00th=[ 223], 80.00th=[ 229], 90.00th=[ 235], 95.00th=[ 245], 00:31:27.685 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 400], 99.95th=[ 400], 00:31:27.685 | 99.99th=[ 400] 00:31:27.685 bw ( KiB/s): min= 4096, max= 4096, per=29.60%, avg=4096.00, stdev= 0.00, 
samples=1 00:31:27.685 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:27.685 lat (usec) : 250=92.90%, 500=2.99% 00:31:27.685 lat (msec) : 50=4.11% 00:31:27.685 cpu : usr=0.29%, sys=0.58%, ctx=536, majf=0, minf=1 00:31:27.685 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:27.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.685 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.685 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:27.685 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:27.685 00:31:27.685 Run status group 0 (all jobs): 00:31:27.685 READ: bw=9216KiB/s (9437kB/s), 89.6KiB/s-5645KiB/s (91.7kB/s-5780kB/s), io=9548KiB (9777kB), run=1001-1036msec 00:31:27.685 WRITE: bw=13.5MiB/s (14.2MB/s), 1994KiB/s-5931KiB/s (2042kB/s-6073kB/s), io=14.0MiB (14.7MB), run=1001-1036msec 00:31:27.685 00:31:27.685 Disk stats (read/write): 00:31:27.685 nvme0n1: ios=343/512, merge=0/0, ticks=1705/105, in_queue=1810, util=97.19% 00:31:27.685 nvme0n2: ios=331/512, merge=0/0, ticks=1723/109, in_queue=1832, util=97.86% 00:31:27.685 nvme0n3: ios=1499/1536, merge=0/0, ticks=1397/222, in_queue=1619, util=99.79% 00:31:27.685 nvme0n4: ios=41/512, merge=0/0, ticks=1641/107, in_queue=1748, util=97.37% 00:31:27.685 10:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:31:27.685 [global] 00:31:27.685 thread=1 00:31:27.685 invalidate=1 00:31:27.685 rw=write 00:31:27.685 time_based=1 00:31:27.685 runtime=1 00:31:27.685 ioengine=libaio 00:31:27.685 direct=1 00:31:27.685 bs=4096 00:31:27.685 iodepth=128 00:31:27.685 norandommap=0 00:31:27.685 numjobs=1 00:31:27.685 00:31:27.685 verify_dump=1 00:31:27.685 verify_backlog=512 00:31:27.685 verify_state_save=0 00:31:27.685 do_verify=1 00:31:27.685 verify=crc32c-intel 00:31:27.685 [job0] 00:31:27.685 filename=/dev/nvme0n1 00:31:27.685 [job1] 00:31:27.685 filename=/dev/nvme0n2 00:31:27.685 [job2] 00:31:27.685 filename=/dev/nvme0n3 00:31:27.685 [job3] 00:31:27.685 filename=/dev/nvme0n4 00:31:27.685 Could not set queue depth (nvme0n1) 00:31:27.685 Could not set queue depth (nvme0n2) 00:31:27.685 Could not set queue depth (nvme0n3) 00:31:27.685 Could not set queue depth (nvme0n4) 00:31:27.943 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:27.943 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:27.943 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:27.943 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:27.943 fio-3.35 00:31:27.943 Starting 4 threads 00:31:29.314 00:31:29.315 job0: (groupid=0, jobs=1): err= 0: pid=1743554: Thu Dec 12 10:45:03 2024 00:31:29.315 read: IOPS=4701, BW=18.4MiB/s (19.3MB/s)(18.5MiB/1007msec) 00:31:29.315 slat (nsec): min=1323, max=10435k, avg=94396.33, stdev=729282.90 00:31:29.315 clat (usec): min=1470, max=30921, avg=11261.49, stdev=3575.74 00:31:29.315 lat (usec): min=2733, max=30927, avg=11355.89, stdev=3641.53 00:31:29.315 clat percentiles (usec): 00:31:29.315 | 1.00th=[ 6915], 5.00th=[ 8225], 10.00th=[ 8717], 20.00th=[ 9241], 00:31:29.315 | 30.00th=[ 9503], 40.00th=[ 9634], 50.00th=[10028], 
60.00th=[10552], 00:31:29.315 | 70.00th=[11338], 80.00th=[12125], 90.00th=[16057], 95.00th=[18220], 00:31:29.315 | 99.00th=[26346], 99.50th=[27657], 99.90th=[29754], 99.95th=[29754], 00:31:29.315 | 99.99th=[30802] 00:31:29.315 write: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec); 0 zone resets 00:31:29.315 slat (usec): min=2, max=10703, avg=103.20, stdev=660.77 00:31:29.315 clat (usec): min=1475, max=66032, avg=14527.81, stdev=11358.56 00:31:29.315 lat (usec): min=1491, max=66042, avg=14631.00, stdev=11416.23 00:31:29.315 clat percentiles (usec): 00:31:29.315 | 1.00th=[ 3884], 5.00th=[ 6194], 10.00th=[ 6325], 20.00th=[ 8455], 00:31:29.315 | 30.00th=[ 9110], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[10290], 00:31:29.315 | 70.00th=[13566], 80.00th=[19792], 90.00th=[29230], 95.00th=[40109], 00:31:29.315 | 99.00th=[62653], 99.50th=[64226], 99.90th=[65799], 99.95th=[65799], 00:31:29.315 | 99.99th=[65799] 00:31:29.315 bw ( KiB/s): min=20464, max=20480, per=28.07%, avg=20472.00, stdev=11.31, samples=2 00:31:29.315 iops : min= 5116, max= 5120, avg=5118.00, stdev= 2.83, samples=2 00:31:29.315 lat (msec) : 2=0.04%, 4=0.71%, 10=51.21%, 20=37.60%, 50=8.84% 00:31:29.315 lat (msec) : 100=1.60% 00:31:29.315 cpu : usr=2.49%, sys=6.96%, ctx=391, majf=0, minf=2 00:31:29.315 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:31:29.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.315 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:29.315 issued rwts: total=4734,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.315 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:29.315 job1: (groupid=0, jobs=1): err= 0: pid=1743556: Thu Dec 12 10:45:03 2024 00:31:29.315 read: IOPS=5070, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1011msec) 00:31:29.315 slat (nsec): min=1247, max=12040k, avg=93854.94, stdev=547268.94 00:31:29.315 clat (usec): min=3942, max=64192, avg=11328.88, stdev=4424.72 00:31:29.315 lat (usec): min=3949, max=64197, avg=11422.74, stdev=4479.06 00:31:29.315 clat percentiles (usec): 00:31:29.315 | 1.00th=[ 4686], 5.00th=[ 8586], 10.00th=[ 9241], 20.00th=[10028], 00:31:29.315 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10814], 60.00th=[10945], 00:31:29.315 | 70.00th=[11207], 80.00th=[11469], 90.00th=[12387], 95.00th=[14615], 00:31:29.315 | 99.00th=[31589], 99.50th=[44827], 99.90th=[55837], 99.95th=[64226], 00:31:29.315 | 99.99th=[64226] 00:31:29.315 write: IOPS=5570, BW=21.8MiB/s (22.8MB/s)(22.0MiB/1011msec); 0 zone resets 00:31:29.315 slat (usec): min=2, max=9092, avg=88.48, stdev=436.22 00:31:29.315 clat (usec): min=2907, max=64169, avg=12452.63, stdev=6434.73 00:31:29.315 lat (usec): min=2917, max=64173, avg=12541.11, stdev=6449.52 00:31:29.315 clat percentiles (usec): 00:31:29.315 | 1.00th=[ 7308], 5.00th=[ 8225], 10.00th=[ 8848], 20.00th=[10159], 00:31:29.315 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10683], 60.00th=[10814], 00:31:29.315 | 70.00th=[11207], 80.00th=[12649], 90.00th=[19530], 95.00th=[20055], 00:31:29.315 | 99.00th=[54264], 99.50th=[55313], 99.90th=[55837], 99.95th=[55837], 00:31:29.315 | 99.99th=[64226] 00:31:29.315 bw ( KiB/s): min=19504, max=24576, per=30.22%, avg=22040.00, stdev=3586.45, samples=2 00:31:29.315 iops : min= 4876, max= 6144, avg=5510.00, stdev=896.61, samples=2 00:31:29.315 lat (msec) : 4=0.19%, 10=18.20%, 20=77.59%, 50=3.14%, 100=0.88% 00:31:29.315 cpu : usr=2.87%, sys=4.16%, ctx=572, majf=0, minf=1 00:31:29.315 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 
32=0.3%, >=64=99.4% 00:31:29.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.315 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:29.315 issued rwts: total=5126,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.315 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:29.315 job2: (groupid=0, jobs=1): err= 0: pid=1743562: Thu Dec 12 10:45:03 2024 00:31:29.315 read: IOPS=2794, BW=10.9MiB/s (11.4MB/s)(10.9MiB/1002msec) 00:31:29.315 slat (nsec): min=1383, max=24926k, avg=150068.94, stdev=1123884.87 00:31:29.315 clat (usec): min=1010, max=91679, avg=20133.99, stdev=14843.67 00:31:29.315 lat (msec): min=4, max=103, avg=20.28, stdev=14.95 00:31:29.315 clat percentiles (usec): 00:31:29.315 | 1.00th=[ 5538], 5.00th=[ 8586], 10.00th=[ 9634], 20.00th=[10421], 00:31:29.315 | 30.00th=[11731], 40.00th=[12911], 50.00th=[13829], 60.00th=[14746], 00:31:29.315 | 70.00th=[17171], 80.00th=[34341], 90.00th=[43779], 95.00th=[49021], 00:31:29.315 | 99.00th=[72877], 99.50th=[84411], 99.90th=[91751], 99.95th=[91751], 00:31:29.315 | 99.99th=[91751] 00:31:29.315 write: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec); 0 zone resets 00:31:29.315 slat (usec): min=2, max=15362, avg=171.86, stdev=971.74 00:31:29.315 clat (usec): min=1191, max=116472, avg=22958.69, stdev=22993.08 00:31:29.315 lat (usec): min=1202, max=118169, avg=23130.55, stdev=23131.47 00:31:29.315 clat percentiles (msec): 00:31:29.315 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 9], 20.00th=[ 12], 00:31:29.315 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 17], 00:31:29.315 | 70.00th=[ 23], 80.00th=[ 31], 90.00th=[ 46], 95.00th=[ 89], 00:31:29.315 | 99.00th=[ 108], 99.50th=[ 114], 99.90th=[ 117], 99.95th=[ 117], 00:31:29.315 | 99.99th=[ 117] 00:31:29.315 bw ( KiB/s): min=11000, max=13576, per=16.85%, avg=12288.00, stdev=1821.51, samples=2 00:31:29.315 iops : min= 2750, max= 3394, avg=3072.00, stdev=455.38, samples=2 00:31:29.315 lat (msec) : 2=0.20%, 4=1.81%, 10=10.85%, 20=54.65%, 50=25.72% 00:31:29.315 lat (msec) : 100=5.04%, 250=1.74% 00:31:29.315 cpu : usr=2.60%, sys=4.10%, ctx=315, majf=0, minf=1 00:31:29.315 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:31:29.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.315 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:29.315 issued rwts: total=2800,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.315 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:29.315 job3: (groupid=0, jobs=1): err= 0: pid=1743563: Thu Dec 12 10:45:03 2024 00:31:29.315 read: IOPS=4520, BW=17.7MiB/s (18.5MB/s)(17.7MiB/1002msec) 00:31:29.315 slat (nsec): min=1311, max=16923k, avg=116245.65, stdev=831910.47 00:31:29.315 clat (usec): min=597, max=57899, avg=14446.93, stdev=5725.26 00:31:29.315 lat (usec): min=1877, max=57911, avg=14563.18, stdev=5795.78 00:31:29.315 clat percentiles (usec): 00:31:29.315 | 1.00th=[ 5080], 5.00th=[ 8979], 10.00th=[10028], 20.00th=[11600], 00:31:29.315 | 30.00th=[12256], 40.00th=[12649], 50.00th=[12911], 60.00th=[13566], 00:31:29.315 | 70.00th=[15270], 80.00th=[16909], 90.00th=[19530], 95.00th=[22676], 00:31:29.315 | 99.00th=[42206], 99.50th=[50070], 99.90th=[57934], 99.95th=[57934], 00:31:29.315 | 99.99th=[57934] 00:31:29.315 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:31:29.315 slat (usec): min=2, max=13504, avg=89.20, stdev=623.51 00:31:29.315 clat (usec): min=1194, max=57855, 
avg=13324.73, stdev=6760.84 00:31:29.315 lat (usec): min=1200, max=57876, avg=13413.93, stdev=6793.03 00:31:29.315 clat percentiles (usec): 00:31:29.315 | 1.00th=[ 2671], 5.00th=[ 6849], 10.00th=[ 8029], 20.00th=[10683], 00:31:29.315 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11731], 60.00th=[11994], 00:31:29.315 | 70.00th=[12256], 80.00th=[15270], 90.00th=[21627], 95.00th=[23200], 00:31:29.315 | 99.00th=[46400], 99.50th=[51119], 99.90th=[53740], 99.95th=[53740], 00:31:29.315 | 99.99th=[57934] 00:31:29.315 bw ( KiB/s): min=18064, max=18800, per=25.28%, avg=18432.00, stdev=520.43, samples=2 00:31:29.315 iops : min= 4516, max= 4700, avg=4608.00, stdev=130.11, samples=2 00:31:29.315 lat (usec) : 750=0.01% 00:31:29.315 lat (msec) : 2=0.31%, 4=0.93%, 10=13.18%, 20=76.10%, 50=8.84% 00:31:29.315 lat (msec) : 100=0.63% 00:31:29.315 cpu : usr=4.10%, sys=5.79%, ctx=359, majf=0, minf=1 00:31:29.315 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:31:29.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.315 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:29.315 issued rwts: total=4530,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.315 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:29.315 00:31:29.315 Run status group 0 (all jobs): 00:31:29.315 READ: bw=66.4MiB/s (69.6MB/s), 10.9MiB/s-19.8MiB/s (11.4MB/s-20.8MB/s), io=67.1MiB (70.4MB), run=1002-1011msec 00:31:29.315 WRITE: bw=71.2MiB/s (74.7MB/s), 12.0MiB/s-21.8MiB/s (12.6MB/s-22.8MB/s), io=72.0MiB (75.5MB), run=1002-1011msec 00:31:29.315 00:31:29.315 Disk stats (read/write): 00:31:29.315 nvme0n1: ios=4262/4608, merge=0/0, ticks=44940/59367, in_queue=104307, util=90.78% 00:31:29.315 nvme0n2: ios=4623/5015, merge=0/0, ticks=16493/22075, in_queue=38568, util=86.89% 00:31:29.315 nvme0n3: ios=2067/2193, merge=0/0, ticks=24753/35313, in_queue=60066, util=98.02% 00:31:29.315 nvme0n4: ios=3607/3962, merge=0/0, ticks=49256/48256, in_queue=97512, util=98.00% 00:31:29.315 10:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:31:29.315 [global] 00:31:29.315 thread=1 00:31:29.315 invalidate=1 00:31:29.315 rw=randwrite 00:31:29.315 time_based=1 00:31:29.315 runtime=1 00:31:29.315 ioengine=libaio 00:31:29.315 direct=1 00:31:29.315 bs=4096 00:31:29.315 iodepth=128 00:31:29.315 norandommap=0 00:31:29.315 numjobs=1 00:31:29.315 00:31:29.315 verify_dump=1 00:31:29.315 verify_backlog=512 00:31:29.315 verify_state_save=0 00:31:29.315 do_verify=1 00:31:29.315 verify=crc32c-intel 00:31:29.315 [job0] 00:31:29.315 filename=/dev/nvme0n1 00:31:29.315 [job1] 00:31:29.315 filename=/dev/nvme0n2 00:31:29.315 [job2] 00:31:29.315 filename=/dev/nvme0n3 00:31:29.315 [job3] 00:31:29.315 filename=/dev/nvme0n4 00:31:29.315 Could not set queue depth (nvme0n1) 00:31:29.315 Could not set queue depth (nvme0n2) 00:31:29.315 Could not set queue depth (nvme0n3) 00:31:29.315 Could not set queue depth (nvme0n4) 00:31:29.573 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:29.573 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:29.573 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:29.573 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, 
(W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:29.573 fio-3.35 00:31:29.573 Starting 4 threads 00:31:30.961 00:31:30.961 job0: (groupid=0, jobs=1): err= 0: pid=1743943: Thu Dec 12 10:45:04 2024 00:31:30.961 read: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec) 00:31:30.961 slat (nsec): min=1101, max=20566k, avg=104104.13, stdev=790532.03 00:31:30.961 clat (usec): min=1325, max=48845, avg=14170.05, stdev=7599.42 00:31:30.961 lat (usec): min=1333, max=48998, avg=14274.15, stdev=7669.98 00:31:30.961 clat percentiles (usec): 00:31:30.961 | 1.00th=[ 4424], 5.00th=[ 7832], 10.00th=[ 8225], 20.00th=[ 9765], 00:31:30.961 | 30.00th=[10159], 40.00th=[10683], 50.00th=[10945], 60.00th=[12125], 00:31:30.961 | 70.00th=[13435], 80.00th=[18220], 90.00th=[25560], 95.00th=[31851], 00:31:30.961 | 99.00th=[41681], 99.50th=[45351], 99.90th=[49021], 99.95th=[49021], 00:31:30.961 | 99.99th=[49021] 00:31:30.961 write: IOPS=4548, BW=17.8MiB/s (18.6MB/s)(17.9MiB/1008msec); 0 zone resets 00:31:30.961 slat (nsec): min=1944, max=25242k, avg=118077.53, stdev=947708.84 00:31:30.961 clat (usec): min=481, max=62254, avg=15152.03, stdev=11660.66 00:31:30.961 lat (usec): min=3916, max=62264, avg=15270.11, stdev=11745.27 00:31:30.961 clat percentiles (usec): 00:31:30.961 | 1.00th=[ 5735], 5.00th=[ 8029], 10.00th=[ 8979], 20.00th=[ 9896], 00:31:30.961 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10421], 60.00th=[10552], 00:31:30.961 | 70.00th=[10814], 80.00th=[14484], 90.00th=[34341], 95.00th=[38536], 00:31:30.961 | 99.00th=[61604], 99.50th=[61604], 99.90th=[62129], 99.95th=[62129], 00:31:30.961 | 99.99th=[62129] 00:31:30.961 bw ( KiB/s): min=11856, max=23800, per=26.35%, avg=17828.00, stdev=8445.68, samples=2 00:31:30.961 iops : min= 2964, max= 5950, avg=4457.00, stdev=2111.42, samples=2 00:31:30.961 lat (usec) : 500=0.01% 00:31:30.961 lat (msec) : 2=0.06%, 4=0.13%, 10=25.43%, 20=57.59%, 50=15.32% 00:31:30.961 lat (msec) : 100=1.46% 00:31:30.961 cpu : usr=3.38%, sys=5.06%, ctx=432, majf=0, minf=1 00:31:30.961 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:31:30.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.961 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:30.961 issued rwts: total=4096,4585,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.961 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:30.961 job1: (groupid=0, jobs=1): err= 0: pid=1743944: Thu Dec 12 10:45:04 2024 00:31:30.961 read: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec) 00:31:30.961 slat (nsec): min=1173, max=15393k, avg=107345.50, stdev=825737.73 00:31:30.961 clat (usec): min=2708, max=35853, avg=15106.55, stdev=6070.20 00:31:30.961 lat (usec): min=2718, max=35921, avg=15213.90, stdev=6131.99 00:31:30.961 clat percentiles (usec): 00:31:30.961 | 1.00th=[ 5211], 5.00th=[ 5997], 10.00th=[ 8848], 20.00th=[ 9765], 00:31:30.961 | 30.00th=[10028], 40.00th=[11994], 50.00th=[15008], 60.00th=[16319], 00:31:30.961 | 70.00th=[18220], 80.00th=[20579], 90.00th=[23462], 95.00th=[26608], 00:31:30.961 | 99.00th=[29492], 99.50th=[31327], 99.90th=[32375], 99.95th=[32375], 00:31:30.961 | 99.99th=[35914] 00:31:30.961 write: IOPS=4353, BW=17.0MiB/s (17.8MB/s)(17.1MiB/1008msec); 0 zone resets 00:31:30.961 slat (usec): min=2, max=18919, avg=115.61, stdev=835.74 00:31:30.961 clat (usec): min=803, max=36723, avg=14999.42, stdev=5944.39 00:31:30.961 lat (usec): min=811, max=36735, avg=15115.02, stdev=6013.46 00:31:30.961 clat percentiles 
(usec): 00:31:30.961 | 1.00th=[ 3621], 5.00th=[ 7898], 10.00th=[ 9372], 20.00th=[10159], 00:31:30.961 | 30.00th=[10552], 40.00th=[12125], 50.00th=[13435], 60.00th=[15664], 00:31:30.961 | 70.00th=[17695], 80.00th=[20317], 90.00th=[21890], 95.00th=[25297], 00:31:30.961 | 99.00th=[34866], 99.50th=[35390], 99.90th=[36963], 99.95th=[36963], 00:31:30.961 | 99.99th=[36963] 00:31:30.961 bw ( KiB/s): min=16384, max=17696, per=25.18%, avg=17040.00, stdev=927.72, samples=2 00:31:30.961 iops : min= 4096, max= 4424, avg=4260.00, stdev=231.93, samples=2 00:31:30.961 lat (usec) : 1000=0.05% 00:31:30.961 lat (msec) : 2=0.19%, 4=0.55%, 10=23.28%, 20=53.26%, 50=22.67% 00:31:30.961 cpu : usr=3.08%, sys=5.56%, ctx=279, majf=0, minf=1 00:31:30.961 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:31:30.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.961 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:30.961 issued rwts: total=4096,4388,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.961 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:30.961 job2: (groupid=0, jobs=1): err= 0: pid=1743945: Thu Dec 12 10:45:04 2024 00:31:30.961 read: IOPS=3597, BW=14.1MiB/s (14.7MB/s)(14.7MiB/1045msec) 00:31:30.961 slat (nsec): min=1192, max=16115k, avg=128145.38, stdev=1000579.41 00:31:30.961 clat (usec): min=1579, max=57026, avg=18517.57, stdev=9062.30 00:31:30.961 lat (usec): min=1602, max=57735, avg=18645.71, stdev=9120.75 00:31:30.961 clat percentiles (usec): 00:31:30.961 | 1.00th=[ 3621], 5.00th=[ 5735], 10.00th=[ 8717], 20.00th=[11731], 00:31:30.961 | 30.00th=[12649], 40.00th=[14353], 50.00th=[17957], 60.00th=[20579], 00:31:30.961 | 70.00th=[22152], 80.00th=[24249], 90.00th=[28443], 95.00th=[32637], 00:31:30.961 | 99.00th=[51119], 99.50th=[51119], 99.90th=[56886], 99.95th=[56886], 00:31:30.961 | 99.99th=[56886] 00:31:30.961 write: IOPS=3919, BW=15.3MiB/s (16.1MB/s)(16.0MiB/1045msec); 0 zone resets 00:31:30.961 slat (nsec): min=1836, max=27009k, avg=113203.38, stdev=996515.97 00:31:30.961 clat (usec): min=4644, max=67164, avg=15157.19, stdev=7800.21 00:31:30.961 lat (usec): min=4652, max=67194, avg=15270.39, stdev=7883.16 00:31:30.961 clat percentiles (usec): 00:31:30.961 | 1.00th=[ 7111], 5.00th=[ 9372], 10.00th=[10421], 20.00th=[10814], 00:31:30.961 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11994], 60.00th=[13960], 00:31:30.961 | 70.00th=[15795], 80.00th=[18482], 90.00th=[21627], 95.00th=[25297], 00:31:30.961 | 99.00th=[52691], 99.50th=[52691], 99.90th=[52691], 99.95th=[52691], 00:31:30.961 | 99.99th=[67634] 00:31:30.961 bw ( KiB/s): min=15744, max=17024, per=24.21%, avg=16384.00, stdev=905.10, samples=2 00:31:30.961 iops : min= 3936, max= 4256, avg=4096.00, stdev=226.27, samples=2 00:31:30.961 lat (msec) : 2=0.38%, 4=0.15%, 10=10.34%, 20=60.08%, 50=27.43% 00:31:30.961 lat (msec) : 100=1.62% 00:31:30.961 cpu : usr=2.49%, sys=4.12%, ctx=223, majf=0, minf=1 00:31:30.961 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:31:30.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.961 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:30.961 issued rwts: total=3759,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.961 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:30.961 job3: (groupid=0, jobs=1): err= 0: pid=1743946: Thu Dec 12 10:45:04 2024 00:31:30.961 read: IOPS=4581, BW=17.9MiB/s (18.8MB/s)(17.9MiB/1001msec) 
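Editor's note (annotation, not part of the captured log): the fio-wrapper invocations above expand their -p/-i/-d/-t/-r arguments into a four-job verifying libaio job file, one job per exported namespace (/dev/nvme0n1 through /dev/nvme0n4); job3's latency breakdown and the group summary continue below. A hypothetical single-job, stand-alone equivalent of the logged randwrite pass, assuming the same block devices are present, might look like this — every option is lifted from the [global] section printed above, only the single-job command-line form is an assumption:

  # Hypothetical single-job equivalent of the wrapper's randwrite pass.
  # The wrapper normally generates a four-job file from
  # "-p nvmf -i 4096 -d 128 -t randwrite -r 1"; the device path below
  # is this run's first namespace.
  fio --name=job0 --filename=/dev/nvme0n1 \
      --rw=randwrite --bs=4096 --iodepth=128 --ioengine=libaio --direct=1 \
      --thread=1 --invalidate=1 --time_based=1 --runtime=1 --numjobs=1 \
      --verify=crc32c-intel --do_verify=1 --verify_backlog=512 \
      --verify_dump=1 --verify_state_save=0

Two reading aids for the summaries: the "Could not set queue depth" warnings appear to be fio noting it could not raise the block-layer queue depth for these NVMe-oF namespaces, and the runs proceed regardless. The per= field is each job's share of the group bandwidth; for example job3's avg=18432KiB/s against the group's 66.1MiB/s WRITE total (about 67686KiB/s) works out to ~27.2%, matching the reported per=27.24%.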
00:31:30.961 slat (nsec): min=1342, max=13916k, avg=103707.70, stdev=637302.88 00:31:30.961 clat (usec): min=663, max=36143, avg=13455.43, stdev=4939.47 00:31:30.961 lat (usec): min=673, max=36158, avg=13559.14, stdev=4979.79 00:31:30.961 clat percentiles (usec): 00:31:30.961 | 1.00th=[ 5669], 5.00th=[ 8291], 10.00th=[ 9634], 20.00th=[10421], 00:31:30.961 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11994], 60.00th=[12518], 00:31:30.961 | 70.00th=[13173], 80.00th=[14615], 90.00th=[22152], 95.00th=[25035], 00:31:30.961 | 99.00th=[30540], 99.50th=[31065], 99.90th=[33424], 99.95th=[33817], 00:31:30.961 | 99.99th=[35914] 00:31:30.961 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:31:30.961 slat (usec): min=2, max=13866, avg=105.02, stdev=607.60 00:31:30.961 clat (usec): min=1161, max=42249, avg=14175.00, stdev=5216.38 00:31:30.961 lat (usec): min=1220, max=43848, avg=14280.02, stdev=5269.55 00:31:30.961 clat percentiles (usec): 00:31:30.961 | 1.00th=[ 9634], 5.00th=[10421], 10.00th=[10945], 20.00th=[11338], 00:31:30.961 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11863], 60.00th=[12125], 00:31:30.961 | 70.00th=[13435], 80.00th=[15664], 90.00th=[21890], 95.00th=[25560], 00:31:30.961 | 99.00th=[33817], 99.50th=[38011], 99.90th=[42206], 99.95th=[42206], 00:31:30.961 | 99.99th=[42206] 00:31:30.961 bw ( KiB/s): min=16472, max=20392, per=27.24%, avg=18432.00, stdev=2771.86, samples=2 00:31:30.961 iops : min= 4118, max= 5098, avg=4608.00, stdev=692.96, samples=2 00:31:30.961 lat (usec) : 750=0.02% 00:31:30.961 lat (msec) : 2=0.01%, 10=8.71%, 20=78.34%, 50=12.91% 00:31:30.961 cpu : usr=3.80%, sys=5.50%, ctx=433, majf=0, minf=1 00:31:30.961 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:31:30.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.961 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:30.961 issued rwts: total=4586,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.961 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:30.961 00:31:30.961 Run status group 0 (all jobs): 00:31:30.961 READ: bw=61.8MiB/s (64.8MB/s), 14.1MiB/s-17.9MiB/s (14.7MB/s-18.8MB/s), io=64.6MiB (67.7MB), run=1001-1045msec 00:31:30.961 WRITE: bw=66.1MiB/s (69.3MB/s), 15.3MiB/s-18.0MiB/s (16.1MB/s-18.9MB/s), io=69.1MiB (72.4MB), run=1001-1045msec 00:31:30.961 00:31:30.961 Disk stats (read/write): 00:31:30.961 nvme0n1: ios=4018/4096, merge=0/0, ticks=27598/24824, in_queue=52422, util=90.38% 00:31:30.961 nvme0n2: ios=3611/3602, merge=0/0, ticks=29140/28614, in_queue=57754, util=99.09% 00:31:30.961 nvme0n3: ios=3072/3312, merge=0/0, ticks=32862/32878, in_queue=65740, util=89.06% 00:31:30.961 nvme0n4: ios=3584/3947, merge=0/0, ticks=21136/23018, in_queue=44154, util=89.72% 00:31:30.961 10:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:31:30.961 10:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1744168 00:31:30.961 10:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:31:30.961 10:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:31:30.961 [global] 00:31:30.961 thread=1 00:31:30.962 invalidate=1 00:31:30.962 rw=read 00:31:30.962 time_based=1 00:31:30.962 runtime=10 00:31:30.962 ioengine=libaio 00:31:30.962 
direct=1 00:31:30.962 bs=4096 00:31:30.962 iodepth=1 00:31:30.962 norandommap=1 00:31:30.962 numjobs=1 00:31:30.962 00:31:30.962 [job0] 00:31:30.962 filename=/dev/nvme0n1 00:31:30.962 [job1] 00:31:30.962 filename=/dev/nvme0n2 00:31:30.962 [job2] 00:31:30.962 filename=/dev/nvme0n3 00:31:30.962 [job3] 00:31:30.962 filename=/dev/nvme0n4 00:31:30.962 Could not set queue depth (nvme0n1) 00:31:30.962 Could not set queue depth (nvme0n2) 00:31:30.962 Could not set queue depth (nvme0n3) 00:31:30.962 Could not set queue depth (nvme0n4) 00:31:31.232 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:31.232 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:31.232 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:31.232 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:31.232 fio-3.35 00:31:31.232 Starting 4 threads 00:31:33.766 10:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:31:34.025 10:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:31:34.025 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=487424, buflen=4096 00:31:34.025 fio: pid=1744365, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:34.284 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=24444928, buflen=4096 00:31:34.284 fio: pid=1744360, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:34.284 10:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:34.284 10:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:31:34.542 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=50315264, buflen=4096 00:31:34.542 fio: pid=1744331, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:34.542 10:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:34.542 10:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:31:34.801 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=9912320, buflen=4096 00:31:34.801 fio: pid=1744345, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:34.801 10:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:34.801 10:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:31:34.802 00:31:34.802 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1744331: Thu Dec 
12 10:45:08 2024 00:31:34.802 read: IOPS=3860, BW=15.1MiB/s (15.8MB/s)(48.0MiB/3182msec) 00:31:34.802 slat (usec): min=5, max=26278, avg=12.06, stdev=306.64 00:31:34.802 clat (usec): min=162, max=41270, avg=243.51, stdev=1229.58 00:31:34.802 lat (usec): min=169, max=56062, avg=255.56, stdev=1306.16 00:31:34.802 clat percentiles (usec): 00:31:34.802 | 1.00th=[ 174], 5.00th=[ 178], 10.00th=[ 180], 20.00th=[ 182], 00:31:34.802 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 194], 00:31:34.802 | 70.00th=[ 212], 80.00th=[ 237], 90.00th=[ 251], 95.00th=[ 258], 00:31:34.802 | 99.00th=[ 318], 99.50th=[ 379], 99.90th=[11863], 99.95th=[41157], 00:31:34.802 | 99.99th=[41157] 00:31:34.802 bw ( KiB/s): min=12024, max=20640, per=64.57%, avg=15827.00, stdev=3578.96, samples=6 00:31:34.802 iops : min= 3006, max= 5160, avg=3956.67, stdev=894.73, samples=6 00:31:34.802 lat (usec) : 250=88.92%, 500=10.94%, 750=0.02% 00:31:34.802 lat (msec) : 10=0.01%, 20=0.02%, 50=0.09% 00:31:34.802 cpu : usr=1.32%, sys=3.33%, ctx=12291, majf=0, minf=1 00:31:34.802 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:34.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.802 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.802 issued rwts: total=12285,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.802 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:34.802 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1744345: Thu Dec 12 10:45:08 2024 00:31:34.802 read: IOPS=713, BW=2853KiB/s (2921kB/s)(9680KiB/3393msec) 00:31:34.802 slat (usec): min=7, max=14765, avg=19.06, stdev=331.10 00:31:34.802 clat (usec): min=194, max=41959, avg=1371.75, stdev=6634.70 00:31:34.802 lat (usec): min=205, max=55941, avg=1390.81, stdev=6698.43 00:31:34.802 clat percentiles (usec): 00:31:34.802 | 1.00th=[ 212], 5.00th=[ 227], 10.00th=[ 233], 20.00th=[ 239], 00:31:34.802 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 251], 00:31:34.802 | 70.00th=[ 258], 80.00th=[ 269], 90.00th=[ 322], 95.00th=[ 441], 00:31:34.802 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:34.802 | 99.99th=[42206] 00:31:34.802 bw ( KiB/s): min= 93, max=14656, per=13.11%, avg=3214.17, stdev=5845.15, samples=6 00:31:34.802 iops : min= 23, max= 3664, avg=803.50, stdev=1461.31, samples=6 00:31:34.802 lat (usec) : 250=55.72%, 500=40.85%, 750=0.66% 00:31:34.802 lat (msec) : 50=2.73% 00:31:34.802 cpu : usr=0.35%, sys=1.33%, ctx=2424, majf=0, minf=2 00:31:34.802 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:34.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.802 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.802 issued rwts: total=2421,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.802 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:34.802 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1744360: Thu Dec 12 10:45:08 2024 00:31:34.802 read: IOPS=2020, BW=8081KiB/s (8275kB/s)(23.3MiB/2954msec) 00:31:34.802 slat (nsec): min=5714, max=59022, avg=7974.18, stdev=2079.47 00:31:34.802 clat (usec): min=163, max=41273, avg=481.82, stdev=3178.48 00:31:34.802 lat (usec): min=171, max=41281, avg=489.79, stdev=3179.59 00:31:34.802 clat percentiles (usec): 00:31:34.802 | 1.00th=[ 186], 5.00th=[ 192], 10.00th=[ 194], 
20.00th=[ 198], 00:31:34.802 | 30.00th=[ 204], 40.00th=[ 215], 50.00th=[ 223], 60.00th=[ 241], 00:31:34.802 | 70.00th=[ 247], 80.00th=[ 251], 90.00th=[ 262], 95.00th=[ 277], 00:31:34.802 | 99.00th=[ 441], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:34.802 | 99.99th=[41157] 00:31:34.802 bw ( KiB/s): min= 96, max=15152, per=34.23%, avg=8390.40, stdev=6582.55, samples=5 00:31:34.802 iops : min= 24, max= 3788, avg=2097.60, stdev=1645.64, samples=5 00:31:34.802 lat (usec) : 250=76.13%, 500=23.22% 00:31:34.802 lat (msec) : 50=0.64% 00:31:34.802 cpu : usr=0.54%, sys=2.07%, ctx=5970, majf=0, minf=1 00:31:34.802 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:34.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.802 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.802 issued rwts: total=5969,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.802 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:34.802 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1744365: Thu Dec 12 10:45:08 2024 00:31:34.802 read: IOPS=43, BW=174KiB/s (178kB/s)(476KiB/2732msec) 00:31:34.802 slat (nsec): min=6833, max=32473, avg=10430.30, stdev=3507.23 00:31:34.802 clat (usec): min=181, max=44974, avg=22845.79, stdev=20331.34 00:31:34.802 lat (usec): min=189, max=44985, avg=22856.20, stdev=20331.52 00:31:34.802 clat percentiles (usec): 00:31:34.802 | 1.00th=[ 190], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 237], 00:31:34.802 | 30.00th=[ 253], 40.00th=[ 392], 50.00th=[40633], 60.00th=[40633], 00:31:34.802 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:34.802 | 99.00th=[42206], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:31:34.802 | 99.99th=[44827] 00:31:34.802 bw ( KiB/s): min= 144, max= 208, per=0.71%, avg=174.40, stdev=25.55, samples=5 00:31:34.802 iops : min= 36, max= 52, avg=43.60, stdev= 6.39, samples=5 00:31:34.802 lat (usec) : 250=29.17%, 500=15.00% 00:31:34.802 lat (msec) : 50=55.00% 00:31:34.802 cpu : usr=0.00%, sys=0.07%, ctx=121, majf=0, minf=2 00:31:34.802 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:34.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.802 complete : 0=0.8%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.802 issued rwts: total=120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.802 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:34.802 00:31:34.802 Run status group 0 (all jobs): 00:31:34.802 READ: bw=23.9MiB/s (25.1MB/s), 174KiB/s-15.1MiB/s (178kB/s-15.8MB/s), io=81.2MiB (85.2MB), run=2732-3393msec 00:31:34.802 00:31:34.802 Disk stats (read/write): 00:31:34.802 nvme0n1: ios=12317/0, merge=0/0, ticks=3738/0, in_queue=3738, util=98.21% 00:31:34.802 nvme0n2: ios=2419/0, merge=0/0, ticks=3252/0, in_queue=3252, util=95.81% 00:31:34.802 nvme0n3: ios=5966/0, merge=0/0, ticks=2766/0, in_queue=2766, util=96.55% 00:31:34.802 nvme0n4: ios=156/0, merge=0/0, ticks=3641/0, in_queue=3641, util=99.44% 00:31:34.802 10:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:34.802 10:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:31:35.060 10:45:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:35.061 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:31:35.319 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:35.319 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:31:35.577 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:35.577 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:31:35.836 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:31:35.836 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1744168 00:31:35.836 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:31:35.836 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:35.836 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:35.836 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:35.836 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:31:35.836 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:35.836 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:35.836 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:35.836 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:35.836 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:31:35.836 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:31:35.836 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:31:35.836 nvmf hotplug test: fio failed as expected 00:31:35.836 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:36.095 10:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:31:36.095 10:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:31:36.095 10:45:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:31:36.095 10:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:31:36.095 10:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:31:36.095 10:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:36.095 10:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:31:36.095 10:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:36.095 10:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:31:36.095 10:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:36.095 10:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:36.095 rmmod nvme_tcp 00:31:36.095 rmmod nvme_fabrics 00:31:36.095 rmmod nvme_keyring 00:31:36.354 10:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:36.354 10:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:31:36.354 10:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:31:36.354 10:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1741619 ']' 00:31:36.354 10:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1741619 00:31:36.354 10:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1741619 ']' 00:31:36.354 10:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1741619 00:31:36.354 10:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:31:36.354 10:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:36.354 10:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1741619 00:31:36.354 10:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:36.354 10:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:36.354 10:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1741619' 00:31:36.354 killing process with pid 1741619 00:31:36.354 10:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1741619 00:31:36.354 10:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1741619 00:31:36.354 10:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:36.354 10:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:36.354 10:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:36.354 10:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:31:36.354 10:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:31:36.354 10:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:36.354 10:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:31:36.354 10:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:36.354 10:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:36.354 10:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:36.354 10:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:36.354 10:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:38.888 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:38.888 00:31:38.888 real 0m25.885s 00:31:38.888 user 1m31.567s 00:31:38.888 sys 0m10.922s 00:31:38.888 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:38.888 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:38.888 ************************************ 00:31:38.888 END TEST nvmf_fio_target 00:31:38.888 ************************************ 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:38.889 ************************************ 00:31:38.889 START TEST nvmf_bdevio 00:31:38.889 ************************************ 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:38.889 * Looking for test storage... 
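Editor's note (annotation, not part of the captured log): the trace above is the nvmf_fio_target teardown, run before the nvmf_bdevio prologue continues below. The fio_status=4 and the "nvmf hotplug test: fio failed as expected" echo are the desired outcome: the long-running read jobs were expected to die with "Operation not supported" (err=95) once their backing raid/malloc bdevs were deleted mid-run. Condensed, and using the NQN, PID, and interface name from this particular run (full rpc.py paths abbreviated), the teardown steps amount to roughly:

  # Condensed sketch of the logged teardown; values are from this run,
  # and this is an approximation of the trace, not the literal scripts.
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1          # detach the initiator
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  rm -f ./local-job0-0-verify.state ./local-job1-1-verify.state \
        ./local-job2-2-verify.state                      # drop fio verify state
  kill 1741619 && wait 1741619                           # stop the nvmf target app
  modprobe -v -r nvme-tcp                                # rmmod then also drops
                                                         # nvme_fabrics/nvme_keyring
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip test firewall rules
  ip -4 addr flush cvl_0_1                               # clear the test NIC address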
00:31:38.889 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:38.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:38.889 --rc genhtml_branch_coverage=1 00:31:38.889 --rc genhtml_function_coverage=1 00:31:38.889 --rc genhtml_legend=1 00:31:38.889 --rc geninfo_all_blocks=1 00:31:38.889 --rc geninfo_unexecuted_blocks=1 00:31:38.889 00:31:38.889 ' 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:38.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:38.889 --rc genhtml_branch_coverage=1 00:31:38.889 --rc genhtml_function_coverage=1 00:31:38.889 --rc genhtml_legend=1 00:31:38.889 --rc geninfo_all_blocks=1 00:31:38.889 --rc geninfo_unexecuted_blocks=1 00:31:38.889 00:31:38.889 ' 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:38.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:38.889 --rc genhtml_branch_coverage=1 00:31:38.889 --rc genhtml_function_coverage=1 00:31:38.889 --rc genhtml_legend=1 00:31:38.889 --rc geninfo_all_blocks=1 00:31:38.889 --rc geninfo_unexecuted_blocks=1 00:31:38.889 00:31:38.889 ' 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:38.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:38.889 --rc genhtml_branch_coverage=1 00:31:38.889 --rc genhtml_function_coverage=1 00:31:38.889 --rc genhtml_legend=1 00:31:38.889 --rc geninfo_all_blocks=1 00:31:38.889 --rc geninfo_unexecuted_blocks=1 00:31:38.889 00:31:38.889 ' 00:31:38.889 10:45:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.889 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:31:38.890 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:38.890 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:38.890 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:38.890 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:38.890 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:38.890 10:45:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:38.890 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:38.890 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:38.890 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:38.890 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:38.890 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:38.890 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:38.890 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:31:38.890 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:38.890 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:38.890 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:38.890 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:38.890 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:38.890 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:38.890 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:38.890 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:38.890 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:38.890 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:38.890 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:31:38.890 10:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:45.464 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:45.464 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:31:45.464 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:45.464 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:45.464 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:45.464 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:45.464 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:45.464 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:31:45.464 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:31:45.464 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:31:45.464 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:31:45.464 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:31:45.464 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:31:45.464 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:31:45.464 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:31:45.464 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:45.464 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:45.464 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:45.464 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:45.464 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:45.464 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:45.464 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:45.464 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:45.464 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:45.465 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:45.465 10:45:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:45.465 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:45.465 Found net devices under 0000:af:00.0: cvl_0_0 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:45.465 Found net devices under 0000:af:00.1: cvl_0_1 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:45.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:45.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:31:45.465 00:31:45.465 --- 10.0.0.2 ping statistics --- 00:31:45.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:45.465 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:45.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:45.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:31:45.465 00:31:45.465 --- 10.0.0.1 ping statistics --- 00:31:45.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:45.465 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:45.465 10:45:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1749064 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1749064 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1749064 ']' 00:31:45.465 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:45.466 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:45.466 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:45.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:45.466 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:45.466 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:45.466 [2024-12-12 10:45:18.630737] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:45.466 [2024-12-12 10:45:18.631666] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:31:45.466 [2024-12-12 10:45:18.631699] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:45.466 [2024-12-12 10:45:18.708774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:45.466 [2024-12-12 10:45:18.748575] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:45.466 [2024-12-12 10:45:18.748635] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:45.466 [2024-12-12 10:45:18.748643] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:45.466 [2024-12-12 10:45:18.748649] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:45.466 [2024-12-12 10:45:18.748653] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:45.466 [2024-12-12 10:45:18.750017] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:31:45.466 [2024-12-12 10:45:18.750130] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:31:45.466 [2024-12-12 10:45:18.750211] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:31:45.466 [2024-12-12 10:45:18.750212] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:31:45.466 [2024-12-12 10:45:18.817577] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
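
[editor's note] The trace above records nvmfappstart launching the target inside the test namespace and waiting on its RPC socket. A minimal sketch of that launch-and-wait pattern, assembled from the recorded command line (the polling loop is an illustrative stand-in for the harness's waitforlisten helper; relative paths assume the SPDK repo root as working directory):

  # launch nvmf_tgt in the test namespace with interrupt mode, mirroring the trace above
  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
  nvmfpid=$!
  # block until the app answers on its default RPC socket before sending config RPCs
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
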
00:31:45.466 [2024-12-12 10:45:18.818395] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:45.466 [2024-12-12 10:45:18.818620] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:45.466 [2024-12-12 10:45:18.819094] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:45.466 [2024-12-12 10:45:18.819131] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:45.466 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:45.466 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:31:45.466 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:45.466 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:45.466 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:45.466 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:45.466 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:45.466 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.466 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:45.466 [2024-12-12 10:45:18.899048] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:45.466 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.466 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:45.466 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.466 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:45.466 Malloc0 00:31:45.466 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.466 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:45.466 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.466 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:45.466 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.466 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:45.466 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.466 10:45:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:45.466 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.466 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:45.466 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.466 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:45.466 [2024-12-12 10:45:18.975195] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:45.466 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.466 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:31:45.466 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:31:45.466 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:31:45.466 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:31:45.466 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:45.466 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:45.466 { 00:31:45.466 "params": { 00:31:45.466 "name": "Nvme$subsystem", 00:31:45.466 "trtype": "$TEST_TRANSPORT", 00:31:45.466 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:45.466 "adrfam": "ipv4", 00:31:45.466 "trsvcid": "$NVMF_PORT", 00:31:45.466 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:45.466 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:45.466 "hdgst": ${hdgst:-false}, 00:31:45.466 "ddgst": ${ddgst:-false} 00:31:45.466 }, 00:31:45.466 "method": "bdev_nvme_attach_controller" 00:31:45.466 } 00:31:45.466 EOF 00:31:45.466 )") 00:31:45.466 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:31:45.466 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:31:45.466 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:31:45.466 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:45.466 "params": { 00:31:45.466 "name": "Nvme1", 00:31:45.466 "trtype": "tcp", 00:31:45.466 "traddr": "10.0.0.2", 00:31:45.466 "adrfam": "ipv4", 00:31:45.466 "trsvcid": "4420", 00:31:45.466 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:45.466 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:45.466 "hdgst": false, 00:31:45.466 "ddgst": false 00:31:45.466 }, 00:31:45.466 "method": "bdev_nvme_attach_controller" 00:31:45.466 }' 00:31:45.466 [2024-12-12 10:45:19.023212] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
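
[editor's note] The rpc_cmd calls traced above (target/bdevio.sh@18 through @22) configure the target over its RPC socket, and the gen_nvmf_target_json heredoc turns the same endpoint into a bdev_nvme_attach_controller config for bdevio. Outside the harness the same setup reduces to five rpc.py invocations; a hedged transcription with the arguments exactly as this run used them, assuming the default /var/tmp/spdk.sock socket:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
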
00:31:45.466 [2024-12-12 10:45:19.023258] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1749093 ] 00:31:45.466 [2024-12-12 10:45:19.098796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:45.466 [2024-12-12 10:45:19.141841] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:45.466 [2024-12-12 10:45:19.141874] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:31:45.466 [2024-12-12 10:45:19.141875] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:31:45.466 I/O targets: 00:31:45.466 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:31:45.466 00:31:45.466 00:31:45.466 CUnit - A unit testing framework for C - Version 2.1-3 00:31:45.466 http://cunit.sourceforge.net/ 00:31:45.466 00:31:45.466 00:31:45.466 Suite: bdevio tests on: Nvme1n1 00:31:45.725 Test: blockdev write read block ...passed 00:31:45.725 Test: blockdev write zeroes read block ...passed 00:31:45.725 Test: blockdev write zeroes read no split ...passed 00:31:45.725 Test: blockdev write zeroes read split ...passed 00:31:45.725 Test: blockdev write zeroes read split partial ...passed 00:31:45.725 Test: blockdev reset ...[2024-12-12 10:45:19.558937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:31:45.725 [2024-12-12 10:45:19.558999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1828610 (9): Bad file descriptor 00:31:45.725 [2024-12-12 10:45:19.610549] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
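
[editor's note] The "blockdev reset" test above disconnects the controller (the "Bad file descriptor" flush error on the stale qpair is expected) and lets bdev_nvme reconnect it. The same reset path can be poked by hand against whichever SPDK app has the bdev loaded; an illustrative sketch, not part of this run, assuming a controller named Nvme1 attached as in the JSON config above:

  # request a controller-level reset; bdev_nvme tears down the qpairs and reconnects
  ./scripts/rpc.py bdev_nvme_reset_controller Nvme1
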
00:31:45.725 passed 00:31:45.725 Test: blockdev write read 8 blocks ...passed 00:31:45.725 Test: blockdev write read size > 128k ...passed 00:31:45.725 Test: blockdev write read invalid size ...passed 00:31:45.725 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:45.725 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:45.725 Test: blockdev write read max offset ...passed 00:31:45.984 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:45.984 Test: blockdev writev readv 8 blocks ...passed 00:31:45.984 Test: blockdev writev readv 30 x 1block ...passed 00:31:45.984 Test: blockdev writev readv block ...passed 00:31:45.984 Test: blockdev writev readv size > 128k ...passed 00:31:45.984 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:45.984 Test: blockdev comparev and writev ...[2024-12-12 10:45:19.861357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:45.984 [2024-12-12 10:45:19.861386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:45.984 [2024-12-12 10:45:19.861400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:45.984 [2024-12-12 10:45:19.861408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:45.984 [2024-12-12 10:45:19.861705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:45.984 [2024-12-12 10:45:19.861716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:45.984 [2024-12-12 10:45:19.861728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:45.984 [2024-12-12 10:45:19.861736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:45.984 [2024-12-12 10:45:19.862023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:45.984 [2024-12-12 10:45:19.862033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:45.984 [2024-12-12 10:45:19.862049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:45.984 [2024-12-12 10:45:19.862056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:45.984 [2024-12-12 10:45:19.862348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:45.984 [2024-12-12 10:45:19.862358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:45.984 [2024-12-12 10:45:19.862370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:45.984 [2024-12-12 10:45:19.862378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:45.984 passed 00:31:45.984 Test: blockdev nvme passthru rw ...passed 00:31:45.984 Test: blockdev nvme passthru vendor specific ...[2024-12-12 10:45:19.944916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:45.984 [2024-12-12 10:45:19.944934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:45.984 [2024-12-12 10:45:19.945046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:45.984 [2024-12-12 10:45:19.945056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:45.984 [2024-12-12 10:45:19.945166] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:45.984 [2024-12-12 10:45:19.945176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:45.984 [2024-12-12 10:45:19.945288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:45.984 [2024-12-12 10:45:19.945298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:45.984 passed 00:31:45.984 Test: blockdev nvme admin passthru ...passed 00:31:45.984 Test: blockdev copy ...passed 00:31:45.984 00:31:45.984 Run Summary: Type Total Ran Passed Failed Inactive 00:31:45.984 suites 1 1 n/a 0 0 00:31:45.984 tests 23 23 23 0 0 00:31:45.984 asserts 152 152 152 0 n/a 00:31:45.984 00:31:45.984 Elapsed time = 1.107 seconds 00:31:46.242 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:46.242 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.242 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:46.242 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.242 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:31:46.243 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:31:46.243 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:46.243 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:31:46.243 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:46.243 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:31:46.243 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:46.243 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:46.243 rmmod nvme_tcp 00:31:46.243 rmmod nvme_fabrics 00:31:46.243 rmmod nvme_keyring 00:31:46.243 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
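
[editor's note] nvmftestfini's teardown, traced above and continued below, is symmetric with the setup: delete the subsystem, kill the target by pid, unload the kernel initiator modules, and restore iptables. A condensed sketch of that order (the pid variable is the harness's convention, and the iptables-save pipeline mirrors the iptr helper traced below at nvmf/common.sh@791):

  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill "$nvmfpid" && wait "$nvmfpid"                     # killprocess in the trace
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the tagged test rules
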
00:31:46.243 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:31:46.243 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:31:46.243 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1749064 ']' 00:31:46.243 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1749064 00:31:46.243 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1749064 ']' 00:31:46.243 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1749064 00:31:46.243 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:31:46.243 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:46.243 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1749064 00:31:46.501 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:31:46.501 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:31:46.501 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1749064' 00:31:46.501 killing process with pid 1749064 00:31:46.501 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1749064 00:31:46.501 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1749064 00:31:46.501 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:46.501 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:46.501 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:46.501 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:31:46.501 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:31:46.501 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:46.501 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:31:46.501 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:46.502 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:46.502 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:46.502 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:46.502 10:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:49.036 10:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:49.036 00:31:49.036 real 0m10.020s 00:31:49.036 user 
0m9.526s 00:31:49.036 sys 0m5.108s 00:31:49.036 10:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:49.036 10:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:49.036 ************************************ 00:31:49.036 END TEST nvmf_bdevio 00:31:49.036 ************************************ 00:31:49.036 10:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:31:49.036 00:31:49.036 real 4m33.218s 00:31:49.036 user 9m8.639s 00:31:49.036 sys 1m49.696s 00:31:49.036 10:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:49.036 10:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:49.036 ************************************ 00:31:49.036 END TEST nvmf_target_core_interrupt_mode 00:31:49.036 ************************************ 00:31:49.036 10:45:22 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:31:49.036 10:45:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:49.036 10:45:22 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:49.036 10:45:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:49.036 ************************************ 00:31:49.036 START TEST nvmf_interrupt 00:31:49.036 ************************************ 00:31:49.036 10:45:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:31:49.036 * Looking for test storage... 
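
[editor's note] Each suite in this log is dispatched through the run_test wrapper, which times the script, prints the START/END banners, and propagates the exit code. The nvmf_interrupt suite starting here can be reproduced outside Jenkins with the same arguments (path taken from the trace; run as root from the SPDK repo root, with the NIC layout this host has):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode
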
00:31:49.036 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:49.036 10:45:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:49.036 10:45:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:31:49.036 10:45:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:49.036 10:45:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:49.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.037 --rc genhtml_branch_coverage=1 00:31:49.037 --rc genhtml_function_coverage=1 00:31:49.037 --rc genhtml_legend=1 00:31:49.037 --rc geninfo_all_blocks=1 00:31:49.037 --rc geninfo_unexecuted_blocks=1 00:31:49.037 00:31:49.037 ' 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:49.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.037 --rc genhtml_branch_coverage=1 00:31:49.037 --rc genhtml_function_coverage=1 00:31:49.037 --rc genhtml_legend=1 00:31:49.037 --rc geninfo_all_blocks=1 00:31:49.037 --rc geninfo_unexecuted_blocks=1 00:31:49.037 00:31:49.037 ' 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:49.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.037 --rc genhtml_branch_coverage=1 00:31:49.037 --rc genhtml_function_coverage=1 00:31:49.037 --rc genhtml_legend=1 00:31:49.037 --rc geninfo_all_blocks=1 00:31:49.037 --rc geninfo_unexecuted_blocks=1 00:31:49.037 00:31:49.037 ' 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:49.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.037 --rc genhtml_branch_coverage=1 00:31:49.037 --rc genhtml_function_coverage=1 00:31:49.037 --rc genhtml_legend=1 00:31:49.037 --rc geninfo_all_blocks=1 00:31:49.037 --rc geninfo_unexecuted_blocks=1 00:31:49.037 00:31:49.037 ' 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:31:49.037 10:45:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:55.606 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:55.606 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:31:55.606 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:55.606 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:55.606 10:45:28 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:55.606 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:55.606 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:55.606 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:31:55.606 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:55.607 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:55.607 10:45:28 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:55.607 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:55.607 Found net devices under 0000:af:00.0: cvl_0_0 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:55.607 Found net devices under 0000:af:00.1: cvl_0_1 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:55.607 10:45:28 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:55.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:55.607 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.326 ms 00:31:55.607 00:31:55.607 --- 10.0.0.2 ping statistics --- 00:31:55.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:55.607 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:55.607 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
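[Annotation] The nvmf_tcp_init sequence traced above (nvmf/common.sh@250 through @291) builds the whole test topology out of the two ice ports discovered earlier: the target port cvl_0_0 is moved into a private network namespace, both sides get addresses on 10.0.0.0/24, an iptables rule opens the NVMe/TCP listener port, and a ping in each direction proves reachability before any SPDK process starts. A condensed standalone sketch of the same steps, assuming the interfaces already carry the cvl_0_0/cvl_0_1 names used in this run:

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                          # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # the comment tags the rule so cleanup (iptables-save | grep -v SPDK_NVMF | iptables-restore,
    # seen at the end of this test) can drop it again
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                       # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1                   # target -> initiator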
00:31:55.607 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:31:55.607 00:31:55.607 --- 10.0.0.1 ping statistics --- 00:31:55.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:55.607 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=1752790 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 1752790 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 1752790 ']' 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:55.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:55.607 10:45:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:55.607 [2024-12-12 10:45:28.781523] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:55.608 [2024-12-12 10:45:28.782423] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:31:55.608 [2024-12-12 10:45:28.782456] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:55.608 [2024-12-12 10:45:28.860912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:55.608 [2024-12-12 10:45:28.901344] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
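[Annotation] nvmfappstart then launches the target inside that namespace with a two-core mask and --interrupt-mode (the flag this test exists to exercise), and waitforlisten blocks until the RPC socket answers; the UNIX-domain socket lives on the shared filesystem, so RPCs work from outside the namespace. A rough equivalent is sketched below. The polling loop is an illustrative stand-in for the real waitforlisten helper, not a copy of it:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!

    # poll the RPC socket until the app is up; bail out if it died during startup
    until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
        sleep 0.5
    done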
00:31:55.608 [2024-12-12 10:45:28.901378] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:55.608 [2024-12-12 10:45:28.901386] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:55.608 [2024-12-12 10:45:28.901392] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:55.608 [2024-12-12 10:45:28.901396] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:55.608 [2024-12-12 10:45:28.902489] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:55.608 [2024-12-12 10:45:28.902490] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:31:55.608 [2024-12-12 10:45:28.969644] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:55.608 [2024-12-12 10:45:28.970216] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:55.608 [2024-12-12 10:45:28.970413] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:55.608 10:45:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:55.608 10:45:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:31:55.608 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:55.608 10:45:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:55.608 10:45:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:31:55.608 5000+0 records in 00:31:55.608 5000+0 records out 00:31:55.608 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0165738 s, 618 MB/s 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:55.608 AIO0 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:55.608 [2024-12-12 10:45:29.091285] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.608 10:45:29 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:55.608 [2024-12-12 10:45:29.127535] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1752790 0 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1752790 0 idle 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1752790 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1752790 -w 256 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1752790 root 20 0 128.2g 46848 34560 S 6.7 0.1 0:00.25 reactor_0' 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1752790 root 20 0 128.2g 46848 34560 S 6.7 0.1 0:00.25 reactor_0 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=6 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1752790 1 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1752790 1 idle 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1752790 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1752790 -w 256 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1752800 root 20 0 128.2g 46848 34560 S 0.0 0.1 0:00.00 reactor_1' 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1752800 root 20 0 128.2g 46848 34560 S 0.0 0.1 0:00.00 reactor_1 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1752869 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
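[Annotation] Every reactor_is_idle/reactor_is_busy assertion in this test boils down to the one sampling pattern visible in the traces above: take a single batch snapshot from top for the target PID, grep out the reactor thread's row, read the %CPU column (field 9 in this run's top layout), truncate the decimal, and compare against the thresholds (idle means at most 30% CPU; busy normally means at least 65%, relaxed to 30% while perf is driving I/O). A condensed sketch of that check, assuming the same top field layout seen here:

    reactor_cpu() {    # usage: reactor_cpu <pid> <reactor-index>
        local pid=$1 idx=$2
        # one batch sample of all threads of the pid, then the reactor's %CPU, truncated
        top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx" | awk '{print int($9)}'
    }

    rate=$(reactor_cpu 1752790 0)
    if (( rate > 30 )); then
        echo "reactor_0 is busy at ${rate}% CPU"
    else
        echo "reactor_0 is idle at ${rate}% CPU"
    fi

In interrupt mode this is the pass criterion: with no I/O in flight both reactors should sit near 0% (as they do above), and they should only spike toward 100% while spdk_nvme_perf is running.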
00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1752790 0 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1752790 0 busy 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1752790 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1752790 -w 256 00:31:55.608 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:55.867 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1752790 root 20 0 128.2g 47616 34560 R 99.9 0.1 0:00.41 reactor_0' 00:31:55.867 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1752790 root 20 0 128.2g 47616 34560 R 99.9 0.1 0:00.41 reactor_0 00:31:55.867 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:55.867 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:55.867 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:31:55.867 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:31:55.867 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:31:55.867 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:31:55.867 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:31:55.867 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:55.867 10:45:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:31:55.867 10:45:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:31:55.867 10:45:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1752790 1 00:31:55.867 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1752790 1 busy 00:31:55.867 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1752790 00:31:55.867 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:55.867 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:31:55.867 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:31:55.867 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:55.867 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:31:55.867 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:55.867 10:45:29 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:31:55.867 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:55.867 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1752790 -w 256 00:31:55.867 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:55.867 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1752800 root 20 0 128.2g 47616 34560 R 93.8 0.1 0:00.26 reactor_1' 00:31:55.867 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1752800 root 20 0 128.2g 47616 34560 R 93.8 0.1 0:00.26 reactor_1 00:31:55.867 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:55.867 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:55.867 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.8 00:31:55.867 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:31:55.867 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:31:55.867 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:31:55.867 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:31:55.867 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:55.867 10:45:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1752869 00:32:05.838 Initializing NVMe Controllers 00:32:05.838 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:05.838 Controller IO queue size 256, less than required. 00:32:05.838 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:05.838 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:05.838 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:05.838 Initialization complete. Launching workers. 
00:32:05.838 ======================================================== 00:32:05.838 Latency(us) 00:32:05.838 Device Information : IOPS MiB/s Average min max 00:32:05.838 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16897.40 66.01 15156.81 3259.87 32336.30 00:32:05.838 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 17067.20 66.67 15005.08 5481.67 29715.57 00:32:05.838 ======================================================== 00:32:05.838 Total : 33964.60 132.67 15080.57 3259.87 32336.30 00:32:05.838 00:32:05.838 [2024-12-12 10:45:39.639145] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd839f0 is same with the state(6) to be set 00:32:05.838 10:45:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:05.838 10:45:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1752790 0 00:32:05.838 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1752790 0 idle 00:32:05.838 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1752790 00:32:05.838 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:05.838 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:05.838 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:05.838 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:05.838 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:05.838 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:05.838 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:05.838 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:05.838 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:05.838 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1752790 -w 256 00:32:05.838 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:05.838 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1752790 root 20 0 128.2g 47616 34560 S 0.0 0.1 0:20.25 reactor_0' 00:32:05.838 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1752790 root 20 0 128.2g 47616 34560 S 0.0 0.1 0:20.25 reactor_0 00:32:05.838 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:05.838 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:05.838 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:05.838 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:05.838 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:05.838 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:05.838 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:05.838 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:05.838 10:45:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:05.838 10:45:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1752790 1 00:32:05.838 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1752790 1 idle 
00:32:05.838 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1752790 00:32:05.838 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:05.838 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:05.838 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:05.838 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:05.838 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:05.838 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:05.838 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:05.838 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:05.839 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:05.839 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1752790 -w 256 00:32:05.839 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:06.097 10:45:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1752800 root 20 0 128.2g 47616 34560 S 0.0 0.1 0:10.00 reactor_1' 00:32:06.097 10:45:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1752800 root 20 0 128.2g 47616 34560 S 0.0 0.1 0:10.00 reactor_1 00:32:06.097 10:45:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:06.098 10:45:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:06.098 10:45:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:06.098 10:45:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:06.098 10:45:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:06.098 10:45:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:06.098 10:45:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:06.098 10:45:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:06.098 10:45:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:06.665 10:45:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:32:06.665 10:45:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:32:06.665 10:45:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:06.665 10:45:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:06.665 10:45:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:32:08.570 10:45:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:08.570 10:45:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:08.570 10:45:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:08.570 10:45:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:32:08.570 10:45:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter 
)) 00:32:08.570 10:45:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:32:08.570 10:45:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:08.570 10:45:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1752790 0 00:32:08.570 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1752790 0 idle 00:32:08.570 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1752790 00:32:08.570 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:08.570 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:08.570 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:08.570 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:08.570 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:08.570 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:08.570 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:08.570 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:08.570 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:08.570 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1752790 -w 256 00:32:08.570 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:08.829 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1752790 root 20 0 128.2g 73728 34560 S 0.0 0.1 0:20.49 reactor_0' 00:32:08.829 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1752790 root 20 0 128.2g 73728 34560 S 0.0 0.1 0:20.49 reactor_0 00:32:08.829 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:08.829 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:08.829 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:08.829 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:08.829 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:08.829 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:08.829 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:08.829 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:08.829 10:45:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:08.829 10:45:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1752790 1 00:32:08.829 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1752790 1 idle 00:32:08.829 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1752790 00:32:08.829 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:08.829 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:08.829 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:08.829 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:08.829 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:08.829 10:45:42 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:08.829 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:08.829 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:08.829 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:08.829 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1752790 -w 256 00:32:08.829 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:08.829 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1752800 root 20 0 128.2g 73728 34560 S 0.0 0.1 0:10.10 reactor_1' 00:32:08.829 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1752800 root 20 0 128.2g 73728 34560 S 0.0 0.1 0:10.10 reactor_1 00:32:08.829 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:08.829 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:09.088 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:09.088 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:09.088 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:09.088 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:09.088 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:09.088 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:09.088 10:45:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:09.088 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:09.088 10:45:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:09.088 10:45:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:32:09.088 10:45:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:09.088 10:45:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:09.088 10:45:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:09.088 10:45:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:09.088 10:45:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:32:09.088 10:45:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:32:09.088 10:45:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:32:09.088 10:45:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:09.088 10:45:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:32:09.088 10:45:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:09.088 10:45:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:32:09.088 10:45:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:09.089 10:45:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:09.089 rmmod nvme_tcp 00:32:09.089 rmmod nvme_fabrics 00:32:09.089 rmmod nvme_keyring 00:32:09.089 10:45:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:09.089 10:45:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:32:09.089 10:45:43 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:32:09.089 10:45:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 1752790 ']' 00:32:09.089 10:45:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 1752790 00:32:09.089 10:45:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 1752790 ']' 00:32:09.089 10:45:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 1752790 00:32:09.089 10:45:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:32:09.089 10:45:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:09.089 10:45:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1752790 00:32:09.089 10:45:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:09.089 10:45:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:09.089 10:45:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1752790' 00:32:09.089 killing process with pid 1752790 00:32:09.089 10:45:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 1752790 00:32:09.089 10:45:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 1752790 00:32:09.347 10:45:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:09.347 10:45:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:09.348 10:45:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:09.348 10:45:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:32:09.348 10:45:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:32:09.348 10:45:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:09.348 10:45:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:32:09.348 10:45:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:09.348 10:45:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:09.348 10:45:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:09.348 10:45:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:09.348 10:45:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:11.887 10:45:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:11.887 00:32:11.887 real 0m22.706s 00:32:11.887 user 0m39.604s 00:32:11.887 sys 0m8.339s 00:32:11.887 10:45:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:11.887 10:45:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:11.887 ************************************ 00:32:11.887 END TEST nvmf_interrupt 00:32:11.887 ************************************ 00:32:11.887 00:32:11.887 real 27m20.172s 00:32:11.887 user 56m19.215s 00:32:11.887 sys 9m12.132s 00:32:11.887 10:45:45 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:11.887 10:45:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:11.887 ************************************ 00:32:11.887 END TEST nvmf_tcp 00:32:11.887 ************************************ 00:32:11.887 10:45:45 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:32:11.887 10:45:45 -- spdk/autotest.sh@286 -- # run_test 
spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:11.887 10:45:45 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:11.887 10:45:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:11.887 10:45:45 -- common/autotest_common.sh@10 -- # set +x 00:32:11.887 ************************************ 00:32:11.887 START TEST spdkcli_nvmf_tcp 00:32:11.887 ************************************ 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:11.887 * Looking for test storage... 00:32:11.887 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:11.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.887 --rc genhtml_branch_coverage=1 00:32:11.887 --rc genhtml_function_coverage=1 00:32:11.887 --rc genhtml_legend=1 00:32:11.887 --rc geninfo_all_blocks=1 00:32:11.887 --rc geninfo_unexecuted_blocks=1 00:32:11.887 00:32:11.887 ' 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:11.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.887 --rc genhtml_branch_coverage=1 00:32:11.887 --rc genhtml_function_coverage=1 00:32:11.887 --rc genhtml_legend=1 00:32:11.887 --rc geninfo_all_blocks=1 00:32:11.887 --rc geninfo_unexecuted_blocks=1 00:32:11.887 00:32:11.887 ' 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:11.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.887 --rc genhtml_branch_coverage=1 00:32:11.887 --rc genhtml_function_coverage=1 00:32:11.887 --rc genhtml_legend=1 00:32:11.887 --rc geninfo_all_blocks=1 00:32:11.887 --rc geninfo_unexecuted_blocks=1 00:32:11.887 00:32:11.887 ' 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:11.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.887 --rc genhtml_branch_coverage=1 00:32:11.887 --rc genhtml_function_coverage=1 00:32:11.887 --rc genhtml_legend=1 00:32:11.887 --rc geninfo_all_blocks=1 00:32:11.887 --rc geninfo_unexecuted_blocks=1 00:32:11.887 00:32:11.887 ' 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:32:11.887 
10:45:45 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:32:11.887 10:45:45 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:11.887 10:45:45 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:11.887 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:11.888 10:45:45 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:11.888 10:45:45 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:11.888 10:45:45 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:11.888 10:45:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:11.888 10:45:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:11.888 10:45:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:11.888 10:45:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:32:11.888 10:45:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:11.888 10:45:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:11.888 10:45:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:32:11.888 10:45:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1755665 00:32:11.888 10:45:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1755665 00:32:11.888 10:45:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 1755665 ']' 00:32:11.888 10:45:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:32:11.888 10:45:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:11.888 10:45:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:11.888 10:45:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:11.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:11.888 10:45:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:11.888 10:45:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:11.888 [2024-12-12 10:45:45.715812] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
00:32:11.888 [2024-12-12 10:45:45.715859] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1755665 ] 00:32:11.888 [2024-12-12 10:45:45.789287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:11.888 [2024-12-12 10:45:45.832925] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:32:11.888 [2024-12-12 10:45:45.832927] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:12.146 10:45:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:12.146 10:45:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:32:12.146 10:45:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:12.146 10:45:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:12.146 10:45:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:12.146 10:45:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:12.146 10:45:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:32:12.146 10:45:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:32:12.146 10:45:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:12.146 10:45:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:12.146 10:45:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:32:12.146 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:12.146 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:32:12.146 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:32:12.146 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:32:12.146 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:32:12.146 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:32:12.146 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:12.146 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:32:12.146 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:32:12.146 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:12.146 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:12.146 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:32:12.146 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:12.146 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:12.146 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:32:12.146 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:32:12.146 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:12.146 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:12.146 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:12.146 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:32:12.146 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:32:12.146 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:12.146 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:32:12.146 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:12.147 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:32:12.147 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:32:12.147 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:32:12.147 ' 00:32:14.693 [2024-12-12 10:45:48.646924] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:16.133 [2024-12-12 10:45:49.983349] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:32:18.658 [2024-12-12 10:45:52.470994] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:32:21.184 [2024-12-12 10:45:54.613627] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:32:22.555 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:32:22.555 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:32:22.555 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:32:22.555 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:32:22.555 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:32:22.555 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:32:22.555 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:32:22.556 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:22.556 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:32:22.556 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:32:22.556 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:22.556 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:22.556 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:32:22.556 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:22.556 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:22.556 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:32:22.556 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:22.556 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:22.556 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:22.556 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:22.556 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:32:22.556 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:32:22.556 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:22.556 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:32:22.556 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:22.556 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:32:22.556 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:32:22.556 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:32:22.556 10:45:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:32:22.556 10:45:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:22.556 10:45:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:22.556 10:45:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:32:22.556 10:45:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:22.556 10:45:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:22.556 10:45:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:32:22.556 10:45:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:32:22.813 10:45:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:32:23.071 10:45:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:32:23.071 10:45:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:32:23.071 10:45:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:23.071 10:45:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:23.071 
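The create phase above is driven by test/spdkcli/spdkcli_job.py, which feeds each quoted command string to spdkcli and checks the output for the expected token; check_match then dumps the resulting tree with scripts/spdkcli.py ll /nvmf and compares it against a wildcard template via test/app/match/match (the rm -f of spdkcli_nvmf.test afterwards suggests the listing is first captured next to its template). The same flow can be sketched by hand with one-shot spdkcli invocations against an already running nvmf_tgt; this is a minimal illustration, not the test's literal steps, and the single-bdev layout is an assumption kept small for readability:

#!/usr/bin/env bash
# Minimal sketch, assuming a running nvmf_tgt on the default RPC socket
# (/var/tmp/spdk.sock) and the SPDK repo root as the working directory.
set -e
cli=scripts/spdkcli.py

$cli "/bdevs/malloc create 32 512 Malloc1"    # 32 MB malloc bdev, 512 B blocks
$cli "nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192"
$cli "/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True"
$cli "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc1 1"
$cli "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4"

# Verification, mirroring check_match above: capture the tree, compare it
# against the wildcard template, then drop the capture.
$cli ll /nvmf > test/spdkcli/match_files/spdkcli_nvmf.test
test/app/match/match test/spdkcli/match_files/spdkcli_nvmf.test.match
rm -f test/spdkcli/match_files/spdkcli_nvmf.test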
10:45:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:32:23.071 10:45:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:23.071 10:45:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:23.071 10:45:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:32:23.071 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:32:23.071 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:23.071 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:32:23.071 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:32:23.071 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:32:23.071 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:32:23.071 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:23.071 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:32:23.071 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:32:23.071 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:32:23.071 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:32:23.071 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:32:23.071 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:32:23.071 ' 00:32:29.628 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:32:29.628 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:32:29.628 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:29.628 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:32:29.628 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:32:29.628 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:32:29.628 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:32:29.628 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:29.628 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:32:29.628 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:32:29.628 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:32:29.628 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:32:29.628 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:32:29.628 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:32:29.628 10:46:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:32:29.628 10:46:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:29.628 10:46:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:29.628 
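The clear phase above tears the configuration down in reverse dependency order: each subsystem's namespaces, hosts, and listen addresses are deleted before the subsystem itself, every subsystem is deleted before the malloc bdevs that backed its namespaces, and only then is the target process killed. Condensed to the same hypothetical one-subsystem layout as the sketch above:

# Teardown sketch: children first, then the subsystem, then its bdev.
cli=scripts/spdkcli.py
$cli "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all"
$cli "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all"
$cli "/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode1"
$cli "/bdevs/malloc delete Malloc1"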
10:46:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1755665 00:32:29.628 10:46:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1755665 ']' 00:32:29.628 10:46:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1755665 00:32:29.628 10:46:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:32:29.628 10:46:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:29.628 10:46:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1755665 00:32:29.628 10:46:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:29.628 10:46:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:29.628 10:46:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1755665' 00:32:29.628 killing process with pid 1755665 00:32:29.628 10:46:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 1755665 00:32:29.628 10:46:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 1755665 00:32:29.628 10:46:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:32:29.628 10:46:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:32:29.628 10:46:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1755665 ']' 00:32:29.628 10:46:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1755665 00:32:29.628 10:46:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1755665 ']' 00:32:29.628 10:46:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1755665 00:32:29.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1755665) - No such process 00:32:29.628 10:46:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 1755665 is not found' 00:32:29.628 Process with pid 1755665 is not found 00:32:29.628 10:46:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:32:29.628 10:46:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:32:29.628 10:46:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:32:29.628 00:32:29.628 real 0m17.297s 00:32:29.628 user 0m38.075s 00:32:29.628 sys 0m0.792s 00:32:29.628 10:46:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:29.628 10:46:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:29.628 ************************************ 00:32:29.628 END TEST spdkcli_nvmf_tcp 00:32:29.628 ************************************ 00:32:29.628 10:46:02 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:29.628 10:46:02 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:29.628 10:46:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:29.628 10:46:02 -- common/autotest_common.sh@10 -- # set +x 00:32:29.628 ************************************ 00:32:29.628 START TEST nvmf_identify_passthru 00:32:29.628 ************************************ 00:32:29.628 10:46:02 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:29.628 * Looking for test 
storage... 00:32:29.628 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:29.628 10:46:02 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:29.628 10:46:02 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:32:29.628 10:46:02 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:29.628 10:46:02 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:29.628 10:46:02 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:29.628 10:46:02 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:29.628 10:46:02 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:29.628 10:46:02 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:32:29.628 10:46:02 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:32:29.628 10:46:02 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:32:29.628 10:46:02 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:32:29.628 10:46:02 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:32:29.628 10:46:02 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:32:29.628 10:46:02 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:32:29.628 10:46:02 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:29.628 10:46:02 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:32:29.628 10:46:02 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:32:29.628 10:46:02 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:29.628 10:46:02 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:29.628 10:46:02 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:32:29.628 10:46:02 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:32:29.628 10:46:02 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:29.628 10:46:02 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:32:29.628 10:46:02 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:32:29.628 10:46:02 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:32:29.628 10:46:02 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:32:29.628 10:46:02 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:29.628 10:46:02 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:32:29.628 10:46:02 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:32:29.628 10:46:02 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:29.628 10:46:02 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:29.628 10:46:02 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:32:29.628 10:46:02 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:29.628 10:46:02 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:29.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:29.628 --rc genhtml_branch_coverage=1 00:32:29.628 --rc genhtml_function_coverage=1 00:32:29.628 --rc genhtml_legend=1 00:32:29.628 --rc geninfo_all_blocks=1 00:32:29.628 --rc geninfo_unexecuted_blocks=1 00:32:29.628 00:32:29.628 ' 00:32:29.628 10:46:02 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:29.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:29.628 --rc genhtml_branch_coverage=1 00:32:29.628 --rc genhtml_function_coverage=1 00:32:29.628 --rc genhtml_legend=1 00:32:29.629 --rc geninfo_all_blocks=1 00:32:29.629 --rc geninfo_unexecuted_blocks=1 00:32:29.629 00:32:29.629 ' 00:32:29.629 10:46:02 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:29.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:29.629 --rc genhtml_branch_coverage=1 00:32:29.629 --rc genhtml_function_coverage=1 00:32:29.629 --rc genhtml_legend=1 00:32:29.629 --rc geninfo_all_blocks=1 00:32:29.629 --rc geninfo_unexecuted_blocks=1 00:32:29.629 00:32:29.629 ' 00:32:29.629 10:46:02 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:29.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:29.629 --rc genhtml_branch_coverage=1 00:32:29.629 --rc genhtml_function_coverage=1 00:32:29.629 --rc genhtml_legend=1 00:32:29.629 --rc geninfo_all_blocks=1 00:32:29.629 --rc geninfo_unexecuted_blocks=1 00:32:29.629 00:32:29.629 ' 00:32:29.629 10:46:02 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:29.629 10:46:02 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:32:29.629 10:46:02 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:29.629 10:46:02 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:29.629 10:46:02 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:29.629 10:46:02 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:32:29.629 10:46:02 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:29.629 10:46:02 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:29.629 10:46:02 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:29.629 10:46:02 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:29.629 10:46:02 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:29.629 10:46:03 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:29.629 10:46:03 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:29.629 10:46:03 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:29.629 10:46:03 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:29.629 10:46:03 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:29.629 10:46:03 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:29.629 10:46:03 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:29.629 10:46:03 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:29.629 10:46:03 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:29.629 10:46:03 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:29.629 10:46:03 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:29.629 10:46:03 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:29.629 10:46:03 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.629 10:46:03 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.629 10:46:03 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.629 10:46:03 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:29.629 10:46:03 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.629 10:46:03 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:32:29.629 10:46:03 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:29.629 10:46:03 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:29.629 10:46:03 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:29.629 10:46:03 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:29.629 10:46:03 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:29.629 10:46:03 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:29.629 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:29.629 10:46:03 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:29.629 10:46:03 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:29.629 10:46:03 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:29.629 10:46:03 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:29.629 10:46:03 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:29.629 10:46:03 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:29.629 10:46:03 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:29.629 10:46:03 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:29.629 10:46:03 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.629 10:46:03 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.629 10:46:03 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.629 10:46:03 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:29.629 10:46:03 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.629 10:46:03 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:32:29.629 10:46:03 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:29.629 10:46:03 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:29.629 10:46:03 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:29.629 10:46:03 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:29.629 10:46:03 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:29.629 10:46:03 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:29.629 10:46:03 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:29.629 10:46:03 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:29.629 10:46:03 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:29.629 10:46:03 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:29.629 10:46:03 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:32:29.629 10:46:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:34.903 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:34.903 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:32:34.903 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:34.903 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:34.903 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:34.903 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:34.903 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:34.903 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:32:34.903 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:34.903 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:32:34.904 10:46:08 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:34.904 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:34.904 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:34.904 Found net devices under 0000:af:00.0: cvl_0_0 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:34.904 Found net devices under 0000:af:00.1: cvl_0_1 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:34.904 10:46:08 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:32:34.904 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:32:35.163 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:32:35.163 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:32:35.163 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms
00:32:35.163
00:32:35.163 --- 10.0.0.2 ping statistics ---
00:32:35.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:35.163 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms
00:32:35.163 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:32:35.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:32:35.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms
00:32:35.163
00:32:35.163 --- 10.0.0.1 ping statistics ---
00:32:35.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:35.163 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms
00:32:35.163 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:32:35.163 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0
00:32:35.163 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:32:35.163 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:32:35.163 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:32:35.163 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:32:35.163 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:32:35.163 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:32:35.163 10:46:08 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:32:35.163 10:46:08 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify
00:32:35.163 10:46:08 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable
00:32:35.163 10:46:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:32:35.163 10:46:08 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf
00:32:35.163 10:46:08 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=()
00:32:35.163 10:46:08 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs
00:32:35.163 10:46:08 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs))
00:32:35.163 10:46:08 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs
00:32:35.163 10:46:08 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=()
00:32:35.163 10:46:08 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs
00:32:35.163 10:46:08 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:32:35.163 10:46:08 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:32:35.163 10:46:08 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:32:35.163 10:46:09 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:32:35.163 10:46:09 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0
00:32:35.163 10:46:09 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0
00:32:35.163 10:46:09 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0
00:32:35.163 10:46:09 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']'
00:32:35.163 10:46:09 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:'
00:32:35.163 10:46:09 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0
00:32:35.163 10:46:09 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}'
00:32:39.347 10:46:13 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ7244049A1P0FGN
00:32:39.348 10:46:13 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0
00:32:39.348 10:46:13 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:'
00:32:39.348 10:46:13 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}'
00:32:43.532 10:46:17 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL
00:32:43.532 10:46:17 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify
00:32:43.532 10:46:17 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable
00:32:43.532 10:46:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:32:43.532 10:46:17 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt
00:32:43.532 10:46:17 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable
00:32:43.532 10:46:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:32:43.532 10:46:17 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1762768
00:32:43.532 10:46:17 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:32:43.532 10:46:17 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:32:43.532 10:46:17 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1762768
00:32:43.532 10:46:17 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 1762768 ']'
00:32:43.532 10:46:17 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:43.532 10:46:17 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:43.532 10:46:17 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:43.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:43.532 10:46:17 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:43.532 10:46:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:32:43.532 [2024-12-12 10:46:17.481175] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization...
00:32:43.532 [2024-12-12 10:46:17.481221] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:43.789 [2024-12-12 10:46:17.556047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:32:43.789 [2024-12-12 10:46:17.598925] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:43.789 [2024-12-12 10:46:17.598963] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:43.789 [2024-12-12 10:46:17.598970] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:32:43.789 [2024-12-12 10:46:17.598976] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:32:43.789 [2024-12-12 10:46:17.598981] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:32:43.789 [2024-12-12 10:46:17.600381] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:32:43.789 [2024-12-12 10:46:17.600494] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:32:43.789 [2024-12-12 10:46:17.600611] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:32:43.789 [2024-12-12 10:46:17.600611] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:32:43.789 10:46:17 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:43.789 10:46:17 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0
00:32:43.789 10:46:17 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr
00:32:43.789 10:46:17 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:43.789 10:46:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:32:43.789 INFO: Log level set to 20
00:32:43.789 INFO: Requests:
00:32:43.789 {
00:32:43.789 "jsonrpc": "2.0",
00:32:43.789 "method": "nvmf_set_config",
00:32:43.789 "id": 1,
00:32:43.789 "params": {
00:32:43.789 "admin_cmd_passthru": {
00:32:43.789 "identify_ctrlr": true
00:32:43.789 }
00:32:43.789 }
00:32:43.789 }
00:32:43.789
00:32:43.789 INFO: response:
00:32:43.789 {
00:32:43.789 "jsonrpc": "2.0",
00:32:43.789 "id": 1,
00:32:43.789 "result": true
00:32:43.789 }
00:32:43.789
00:32:43.789 10:46:17 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:43.789 10:46:17 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init
00:32:43.789 10:46:17 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:43.789 10:46:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:32:43.789 INFO: Setting log level to 20
00:32:43.789 INFO: Setting log level to 20
00:32:43.789 INFO: Log level set to 20
00:32:43.789 INFO: Log level set to 20
00:32:43.789 INFO: Requests:
00:32:43.789 {
00:32:43.789 "jsonrpc": "2.0",
00:32:43.789 "method": "framework_start_init",
00:32:43.789 "id": 1
00:32:43.789 }
00:32:43.789
00:32:43.789 INFO: Requests:
00:32:43.789 {
00:32:43.789 "jsonrpc": "2.0",
00:32:43.789 "method": "framework_start_init",
00:32:43.789 "id": 1
00:32:43.789 }
00:32:43.789
00:32:43.789 [2024-12-12 10:46:17.709292] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled
00:32:43.789 INFO: response:
00:32:43.789 {
00:32:43.789 "jsonrpc": "2.0",
00:32:43.789 "id": 1,
00:32:43.789 "result": true
00:32:43.789 }
00:32:43.789
00:32:43.789 INFO: response:
00:32:43.789 {
00:32:43.789 "jsonrpc": "2.0",
00:32:43.789 "id": 1,
00:32:43.789 "result": true
00:32:43.789 }
00:32:43.789
00:32:43.789 10:46:17 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:43.789 10:46:17 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:32:43.789 10:46:17 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:43.789 10:46:17 nvmf_identify_passthru --
common/autotest_common.sh@10 -- # set +x 00:32:43.789 INFO: Setting log level to 40 00:32:43.789 INFO: Setting log level to 40 00:32:43.789 INFO: Setting log level to 40 00:32:43.789 [2024-12-12 10:46:17.722594] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:43.789 10:46:17 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.789 10:46:17 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:32:43.789 10:46:17 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:43.789 10:46:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:43.789 10:46:17 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:32:43.789 10:46:17 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.789 10:46:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:47.068 Nvme0n1 00:32:47.069 10:46:20 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.069 10:46:20 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:32:47.069 10:46:20 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.069 10:46:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:47.069 10:46:20 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.069 10:46:20 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:32:47.069 10:46:20 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.069 10:46:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:47.069 10:46:20 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.069 10:46:20 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:47.069 10:46:20 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.069 10:46:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:47.069 [2024-12-12 10:46:20.645930] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:47.069 10:46:20 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.069 10:46:20 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:32:47.069 10:46:20 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.069 10:46:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:47.069 [ 00:32:47.069 { 00:32:47.069 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:47.069 "subtype": "Discovery", 00:32:47.069 "listen_addresses": [], 00:32:47.069 "allow_any_host": true, 00:32:47.069 "hosts": [] 00:32:47.069 }, 00:32:47.069 { 00:32:47.069 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:47.069 "subtype": "NVMe", 00:32:47.069 "listen_addresses": [ 00:32:47.069 { 00:32:47.069 "trtype": "TCP", 00:32:47.069 "adrfam": "IPv4", 00:32:47.069 "traddr": "10.0.0.2", 00:32:47.069 "trsvcid": "4420" 00:32:47.069 } 00:32:47.069 ], 00:32:47.069 "allow_any_host": true, 00:32:47.069 "hosts": [], 00:32:47.069 "serial_number": 
"SPDK00000000000001", 00:32:47.069 "model_number": "SPDK bdev Controller", 00:32:47.069 "max_namespaces": 1, 00:32:47.069 "min_cntlid": 1, 00:32:47.069 "max_cntlid": 65519, 00:32:47.069 "namespaces": [ 00:32:47.069 { 00:32:47.069 "nsid": 1, 00:32:47.069 "bdev_name": "Nvme0n1", 00:32:47.069 "name": "Nvme0n1", 00:32:47.069 "nguid": "74C719E08679431D9EE82E60E66AEA66", 00:32:47.069 "uuid": "74c719e0-8679-431d-9ee8-2e60e66aea66" 00:32:47.069 } 00:32:47.069 ] 00:32:47.069 } 00:32:47.069 ] 00:32:47.069 10:46:20 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.069 10:46:20 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:32:47.069 10:46:20 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:47.069 10:46:20 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:32:47.069 10:46:20 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ7244049A1P0FGN 00:32:47.069 10:46:20 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:47.069 10:46:20 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:32:47.069 10:46:20 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:32:47.326 10:46:21 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:32:47.326 10:46:21 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ7244049A1P0FGN '!=' BTLJ7244049A1P0FGN ']' 00:32:47.326 10:46:21 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:32:47.326 10:46:21 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:47.326 10:46:21 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.326 10:46:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:47.326 10:46:21 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.326 10:46:21 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:32:47.326 10:46:21 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:32:47.326 10:46:21 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:47.326 10:46:21 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:32:47.326 10:46:21 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:47.326 10:46:21 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:32:47.326 10:46:21 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:47.326 10:46:21 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:47.326 rmmod nvme_tcp 00:32:47.326 rmmod nvme_fabrics 00:32:47.326 rmmod nvme_keyring 00:32:47.326 10:46:21 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:47.326 10:46:21 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:32:47.326 10:46:21 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:32:47.326 10:46:21 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 1762768 ']' 00:32:47.326 10:46:21 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 1762768 00:32:47.326 10:46:21 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 1762768 ']' 00:32:47.326 10:46:21 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 1762768 00:32:47.326 10:46:21 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:32:47.326 10:46:21 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:47.326 10:46:21 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1762768 00:32:47.326 10:46:21 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:47.326 10:46:21 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:47.326 10:46:21 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1762768' 00:32:47.326 killing process with pid 1762768 00:32:47.326 10:46:21 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 1762768 00:32:47.326 10:46:21 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 1762768 00:32:49.226 10:46:22 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:49.226 10:46:22 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:49.226 10:46:22 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:49.226 10:46:22 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:32:49.226 10:46:22 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:32:49.226 10:46:22 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:49.226 10:46:22 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:32:49.226 10:46:22 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:49.226 10:46:22 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:49.226 10:46:22 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:49.226 10:46:22 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:49.226 10:46:22 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:51.131 10:46:24 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:51.131 00:32:51.131 real 0m21.989s 00:32:51.131 user 0m27.124s 00:32:51.131 sys 0m6.225s 00:32:51.131 10:46:24 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:51.131 10:46:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:51.131 ************************************ 00:32:51.131 END TEST nvmf_identify_passthru 00:32:51.131 ************************************ 00:32:51.131 10:46:24 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:51.131 10:46:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:51.131 10:46:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:51.131 10:46:24 -- common/autotest_common.sh@10 -- # set +x 00:32:51.131 ************************************ 00:32:51.131 START TEST nvmf_dif 00:32:51.131 ************************************ 00:32:51.131 10:46:24 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:51.131 * Looking for test 
storage... 00:32:51.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:51.131 10:46:24 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:51.131 10:46:24 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:32:51.131 10:46:24 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:51.131 10:46:25 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:51.131 10:46:25 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:51.131 10:46:25 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:51.131 10:46:25 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:51.131 10:46:25 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:32:51.131 10:46:25 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:32:51.131 10:46:25 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:32:51.131 10:46:25 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:32:51.131 10:46:25 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:32:51.131 10:46:25 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:32:51.131 10:46:25 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:32:51.131 10:46:25 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:51.131 10:46:25 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:32:51.131 10:46:25 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:32:51.131 10:46:25 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:51.131 10:46:25 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:51.131 10:46:25 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:32:51.131 10:46:25 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:32:51.131 10:46:25 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:51.131 10:46:25 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:32:51.131 10:46:25 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:32:51.131 10:46:25 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:32:51.131 10:46:25 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:32:51.131 10:46:25 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:51.131 10:46:25 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:32:51.131 10:46:25 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:32:51.131 10:46:25 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:51.131 10:46:25 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:51.131 10:46:25 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:32:51.131 10:46:25 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:51.131 10:46:25 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:51.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:51.131 --rc genhtml_branch_coverage=1 00:32:51.131 --rc genhtml_function_coverage=1 00:32:51.131 --rc genhtml_legend=1 00:32:51.131 --rc geninfo_all_blocks=1 00:32:51.131 --rc geninfo_unexecuted_blocks=1 00:32:51.131 00:32:51.131 ' 00:32:51.131 10:46:25 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:51.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:51.131 --rc genhtml_branch_coverage=1 00:32:51.131 --rc genhtml_function_coverage=1 00:32:51.131 --rc genhtml_legend=1 00:32:51.131 --rc geninfo_all_blocks=1 00:32:51.131 --rc geninfo_unexecuted_blocks=1 00:32:51.131 00:32:51.131 ' 00:32:51.131 10:46:25 nvmf_dif -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:51.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:51.131 --rc genhtml_branch_coverage=1 00:32:51.131 --rc genhtml_function_coverage=1 00:32:51.131 --rc genhtml_legend=1 00:32:51.131 --rc geninfo_all_blocks=1 00:32:51.131 --rc geninfo_unexecuted_blocks=1 00:32:51.131 00:32:51.131 ' 00:32:51.131 10:46:25 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:51.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:51.131 --rc genhtml_branch_coverage=1 00:32:51.131 --rc genhtml_function_coverage=1 00:32:51.131 --rc genhtml_legend=1 00:32:51.131 --rc geninfo_all_blocks=1 00:32:51.131 --rc geninfo_unexecuted_blocks=1 00:32:51.131 00:32:51.131 ' 00:32:51.131 10:46:25 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:51.131 10:46:25 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:32:51.131 10:46:25 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:51.131 10:46:25 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:51.131 10:46:25 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:51.131 10:46:25 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:51.131 10:46:25 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:51.131 10:46:25 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:51.131 10:46:25 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:51.131 10:46:25 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:51.131 10:46:25 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:51.131 10:46:25 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:51.131 10:46:25 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:51.131 10:46:25 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:51.131 10:46:25 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:51.131 10:46:25 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:51.131 10:46:25 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:51.131 10:46:25 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:51.131 10:46:25 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:51.131 10:46:25 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:32:51.131 10:46:25 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:51.131 10:46:25 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:51.131 10:46:25 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:51.131 10:46:25 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.131 10:46:25 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.131 10:46:25 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.131 10:46:25 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:32:51.131 10:46:25 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.131 10:46:25 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:32:51.131 10:46:25 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:51.131 10:46:25 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:51.131 10:46:25 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:51.131 10:46:25 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:51.131 10:46:25 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:51.131 10:46:25 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:51.131 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:51.131 10:46:25 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:51.131 10:46:25 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:51.131 10:46:25 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:51.131 10:46:25 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:32:51.132 10:46:25 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:32:51.132 10:46:25 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:32:51.132 10:46:25 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:32:51.132 10:46:25 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:32:51.132 10:46:25 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:51.132 10:46:25 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:51.132 10:46:25 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:51.132 10:46:25 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:51.132 10:46:25 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:51.132 10:46:25 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:51.132 10:46:25 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:51.132 10:46:25 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:51.132 10:46:25 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:51.132 10:46:25 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:51.132 10:46:25 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:32:51.132 10:46:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:57.699 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:57.699 
10:46:30 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:57.699 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:57.699 Found net devices under 0000:af:00.0: cvl_0_0 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:57.699 Found net devices under 0000:af:00.1: cvl_0_1 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:57.699 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:57.699 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:32:57.699 00:32:57.699 --- 10.0.0.2 ping statistics --- 00:32:57.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:57.699 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:32:57.699 10:46:30 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:57.699 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
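The namespace wiring captured above is the heart of nvmftestinit on phy hardware: one e810 port (cvl_0_0) is moved into a private network namespace to act as the target while its sibling (cvl_0_1) stays in the root namespace as the initiator, so NVMe/TCP traffic crosses real NICs on a single host. A condensed replay of the commands logged here (the interface names and 10.0.0.x addresses are specific to this run):

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1  # start from a clean slate
    ip netns add cvl_0_0_ns_spdk                          # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port; the SPDK_NVMF comment is what lets the
    # "iptables-save | grep -v SPDK_NVMF | iptables-restore" teardown seen
    # earlier find and strip this rule again
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                    # initiator -> target check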
00:32:57.699 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:32:57.699 00:32:57.699 --- 10.0.0.1 ping statistics --- 00:32:57.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:57.700 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:32:57.700 10:46:30 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:57.700 10:46:30 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:32:57.700 10:46:30 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:32:57.700 10:46:30 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:59.609 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:32:59.609 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:32:59.609 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:32:59.609 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:32:59.609 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:32:59.609 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:32:59.609 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:32:59.609 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:32:59.609 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:32:59.609 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:32:59.609 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:32:59.609 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:32:59.609 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:32:59.609 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:32:59.609 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:32:59.609 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:32:59.609 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:32:59.868 10:46:33 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:59.868 10:46:33 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:59.868 10:46:33 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:59.868 10:46:33 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:59.868 10:46:33 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:59.868 10:46:33 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:59.868 10:46:33 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:32:59.868 10:46:33 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:32:59.868 10:46:33 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:59.868 10:46:33 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:59.868 10:46:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:59.868 10:46:33 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=1768173 00:32:59.869 10:46:33 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 1768173 00:32:59.869 10:46:33 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:32:59.869 10:46:33 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 1768173 ']' 00:32:59.869 10:46:33 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:59.869 10:46:33 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:59.869 10:46:33 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:32:59.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:59.869 10:46:33 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:59.869 10:46:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:59.869 [2024-12-12 10:46:33.854338] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:32:59.869 [2024-12-12 10:46:33.854381] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:00.127 [2024-12-12 10:46:33.930858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:00.127 [2024-12-12 10:46:33.971593] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:00.127 [2024-12-12 10:46:33.971629] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:00.127 [2024-12-12 10:46:33.971636] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:00.127 [2024-12-12 10:46:33.971641] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:00.127 [2024-12-12 10:46:33.971646] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:00.127 [2024-12-12 10:46:33.972147] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:33:00.127 10:46:34 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:00.127 10:46:34 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:33:00.127 10:46:34 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:00.127 10:46:34 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:00.127 10:46:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:00.127 10:46:34 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:00.127 10:46:34 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:33:00.127 10:46:34 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:00.127 10:46:34 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.127 10:46:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:00.127 [2024-12-12 10:46:34.105647] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:00.127 10:46:34 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.127 10:46:34 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:00.127 10:46:34 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:00.127 10:46:34 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:00.127 10:46:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:00.127 ************************************ 00:33:00.127 START TEST fio_dif_1_default 00:33:00.127 ************************************ 00:33:00.127 10:46:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:33:00.127 10:46:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:33:00.127 10:46:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:33:00.127 10:46:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:33:00.127 10:46:34 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:33:00.127 10:46:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:33:00.127 10:46:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:00.127 10:46:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.127 10:46:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:00.386 bdev_null0 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:00.386 [2024-12-12 10:46:34.173931] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:00.386 { 00:33:00.386 "params": { 00:33:00.386 "name": "Nvme$subsystem", 00:33:00.386 "trtype": "$TEST_TRANSPORT", 00:33:00.386 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:00.386 "adrfam": "ipv4", 00:33:00.386 "trsvcid": "$NVMF_PORT", 00:33:00.386 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:00.386 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:33:00.386 "hdgst": ${hdgst:-false}, 00:33:00.386 "ddgst": ${ddgst:-false} 00:33:00.386 }, 00:33:00.386 "method": "bdev_nvme_attach_controller" 00:33:00.386 } 00:33:00.386 EOF 00:33:00.386 )") 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:00.386 "params": { 00:33:00.386 "name": "Nvme0", 00:33:00.386 "trtype": "tcp", 00:33:00.386 "traddr": "10.0.0.2", 00:33:00.386 "adrfam": "ipv4", 00:33:00.386 "trsvcid": "4420", 00:33:00.386 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:00.386 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:00.386 "hdgst": false, 00:33:00.386 "ddgst": false 00:33:00.386 }, 00:33:00.386 "method": "bdev_nvme_attach_controller" 00:33:00.386 }' 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:00.386 10:46:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:00.645 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:00.645 fio-3.35 00:33:00.645 Starting 1 thread 00:33:12.842 00:33:12.842 filename0: (groupid=0, jobs=1): err= 0: pid=1768508: Thu Dec 12 10:46:45 2024 00:33:12.842 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10007msec) 00:33:12.842 slat (nsec): min=5593, max=27025, avg=6188.18, stdev=1063.19 00:33:12.842 clat (usec): min=40803, max=43066, avg=40991.68, stdev=140.85 00:33:12.842 lat (usec): min=40810, max=43093, avg=40997.87, stdev=141.35 00:33:12.842 clat percentiles (usec): 00:33:12.842 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:33:12.842 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:12.842 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:12.842 | 99.00th=[41157], 99.50th=[41157], 99.90th=[43254], 99.95th=[43254], 00:33:12.842 | 99.99th=[43254] 00:33:12.842 bw ( KiB/s): min= 384, max= 416, per=99.71%, avg=389.05, stdev=11.99, samples=19 00:33:12.842 iops : min= 96, max= 104, avg=97.26, stdev= 3.00, samples=19 00:33:12.842 lat (msec) : 50=100.00% 00:33:12.842 cpu : usr=91.86%, sys=7.88%, ctx=30, majf=0, minf=0 00:33:12.842 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:12.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:12.842 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:12.842 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:12.842 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:12.842 00:33:12.842 Run status group 0 (all jobs): 
00:33:12.842 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10007-10007msec 00:33:12.842 10:46:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:33:12.842 10:46:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:33:12.842 10:46:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:33:12.842 10:46:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:12.842 10:46:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:33:12.842 10:46:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:12.842 10:46:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.842 10:46:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:12.842 10:46:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.842 10:46:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:12.842 10:46:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.842 10:46:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:12.842 10:46:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.842 00:33:12.842 real 0m11.238s 00:33:12.842 user 0m16.531s 00:33:12.842 sys 0m1.081s 00:33:12.842 10:46:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:12.842 10:46:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:12.842 ************************************ 00:33:12.842 END TEST fio_dif_1_default 00:33:12.842 ************************************ 00:33:12.842 10:46:45 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:33:12.842 10:46:45 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:12.842 10:46:45 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:12.842 10:46:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:12.842 ************************************ 00:33:12.842 START TEST fio_dif_1_multi_subsystems 00:33:12.842 ************************************ 00:33:12.842 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:33:12.842 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:33:12.842 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:33:12.842 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:12.843 bdev_null0 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:12.843 [2024-12-12 10:46:45.480765] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:12.843 bdev_null1 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:12.843 { 00:33:12.843 "params": { 00:33:12.843 "name": "Nvme$subsystem", 00:33:12.843 "trtype": "$TEST_TRANSPORT", 00:33:12.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:12.843 "adrfam": "ipv4", 00:33:12.843 "trsvcid": "$NVMF_PORT", 00:33:12.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:12.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:12.843 "hdgst": ${hdgst:-false}, 00:33:12.843 "ddgst": ${ddgst:-false} 00:33:12.843 }, 00:33:12.843 "method": "bdev_nvme_attach_controller" 00:33:12.843 } 00:33:12.843 EOF 00:33:12.843 )") 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:12.843 { 00:33:12.843 "params": { 00:33:12.843 "name": "Nvme$subsystem", 00:33:12.843 "trtype": "$TEST_TRANSPORT", 00:33:12.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:12.843 "adrfam": "ipv4", 00:33:12.843 "trsvcid": "$NVMF_PORT", 00:33:12.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:12.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:12.843 "hdgst": ${hdgst:-false}, 00:33:12.843 "ddgst": ${ddgst:-false} 00:33:12.843 }, 00:33:12.843 "method": "bdev_nvme_attach_controller" 00:33:12.843 } 00:33:12.843 EOF 00:33:12.843 )") 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:33:12.843 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:12.843 "params": { 00:33:12.843 "name": "Nvme0", 00:33:12.843 "trtype": "tcp", 00:33:12.843 "traddr": "10.0.0.2", 00:33:12.843 "adrfam": "ipv4", 00:33:12.843 "trsvcid": "4420", 00:33:12.843 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:12.843 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:12.843 "hdgst": false, 00:33:12.843 "ddgst": false 00:33:12.843 }, 00:33:12.843 "method": "bdev_nvme_attach_controller" 00:33:12.843 },{ 00:33:12.843 "params": { 00:33:12.843 "name": "Nvme1", 00:33:12.843 "trtype": "tcp", 00:33:12.843 "traddr": "10.0.0.2", 00:33:12.843 "adrfam": "ipv4", 00:33:12.843 "trsvcid": "4420", 00:33:12.843 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:12.843 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:12.843 "hdgst": false, 00:33:12.844 "ddgst": false 00:33:12.844 }, 00:33:12.844 "method": "bdev_nvme_attach_controller" 00:33:12.844 }' 00:33:12.844 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:12.844 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:12.844 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:12.844 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:12.844 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:12.844 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:12.844 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 
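The other descriptor, /dev/fd/61, carries the job file produced by gen_fio_conf. A standalone equivalent for this two-subsystem pass is sketched below; the global options are inferred from the job banners and results that follow (randread, 4 KiB blocks, iodepth 4, roughly 10-second runs), and the Nvme0n1/Nvme1n1 filenames are an assumption (controller names from the JSON above plus namespace 1):

    cat > dif.fio <<'EOF'
    [global]
    ioengine=spdk_bdev
    spdk_json_conf=./bdev.json   ; the two-controller config printed above
    thread=1                     ; required by the SPDK fio plugin
    rw=randread
    bs=4k
    iodepth=4
    time_based=1
    runtime=10

    [filename0]
    filename=Nvme0n1

    [filename1]
    filename=Nvme1n1
    EOF
    LD_PRELOAD=build/fio/spdk_bdev fio dif.fio

One job section per subsystem gives fio one thread per bdev, which is why the run below reports two filenames with independent bandwidth figures.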
00:33:12.844 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:12.844 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:12.844 10:46:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:12.844 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:12.844 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:12.844 fio-3.35 00:33:12.844 Starting 2 threads 00:33:22.812 00:33:22.812 filename0: (groupid=0, jobs=1): err= 0: pid=1770495: Thu Dec 12 10:46:56 2024 00:33:22.812 read: IOPS=98, BW=393KiB/s (403kB/s)(3936KiB/10006msec) 00:33:22.812 slat (nsec): min=6003, max=33184, avg=7627.73, stdev=2415.68 00:33:22.812 clat (usec): min=415, max=41606, avg=40650.27, stdev=3644.17 00:33:22.812 lat (usec): min=421, max=41613, avg=40657.90, stdev=3644.18 00:33:22.812 clat percentiles (usec): 00:33:22.812 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:33:22.812 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:22.812 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:22.812 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:33:22.812 | 99.99th=[41681] 00:33:22.812 bw ( KiB/s): min= 384, max= 448, per=33.51%, avg=392.00, stdev=17.60, samples=20 00:33:22.812 iops : min= 96, max= 112, avg=98.00, stdev= 4.40, samples=20 00:33:22.812 lat (usec) : 500=0.81% 00:33:22.812 lat (msec) : 50=99.19% 00:33:22.812 cpu : usr=97.11%, sys=2.64%, ctx=18, majf=0, minf=9 00:33:22.812 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:22.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:22.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:22.812 issued rwts: total=984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:22.812 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:22.812 filename1: (groupid=0, jobs=1): err= 0: pid=1770496: Thu Dec 12 10:46:56 2024 00:33:22.812 read: IOPS=194, BW=777KiB/s (796kB/s)(7792KiB/10026msec) 00:33:22.812 slat (nsec): min=6006, max=29266, avg=7098.94, stdev=2067.87 00:33:22.812 clat (usec): min=380, max=42542, avg=20566.42, stdev=20240.67 00:33:22.812 lat (usec): min=386, max=42549, avg=20573.52, stdev=20240.06 00:33:22.812 clat percentiles (usec): 00:33:22.812 | 1.00th=[ 392], 5.00th=[ 404], 10.00th=[ 408], 20.00th=[ 416], 00:33:22.812 | 30.00th=[ 424], 40.00th=[ 515], 50.00th=[ 783], 60.00th=[40633], 00:33:22.812 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:33:22.812 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:33:22.812 | 99.99th=[42730] 00:33:22.812 bw ( KiB/s): min= 704, max= 832, per=66.42%, avg=777.60, stdev=37.58, samples=20 00:33:22.812 iops : min= 176, max= 208, avg=194.40, stdev= 9.39, samples=20 00:33:22.812 lat (usec) : 500=39.63%, 750=10.06%, 1000=0.41% 00:33:22.812 lat (msec) : 2=0.21%, 50=49.69% 00:33:22.812 cpu : usr=96.68%, sys=3.07%, ctx=14, majf=0, minf=9 00:33:22.812 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:22.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:33:22.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:22.812 issued rwts: total=1948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:22.812 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:22.812 00:33:22.812 Run status group 0 (all jobs): 00:33:22.812 READ: bw=1170KiB/s (1198kB/s), 393KiB/s-777KiB/s (403kB/s-796kB/s), io=11.5MiB (12.0MB), run=10006-10026msec 00:33:23.071 10:46:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:33:23.071 10:46:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:33:23.071 10:46:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:23.071 10:46:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:23.071 10:46:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:33:23.071 10:46:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:23.071 10:46:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.071 10:46:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:23.071 10:46:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.071 10:46:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:23.071 10:46:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.071 10:46:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:23.071 10:46:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.071 10:46:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:23.071 10:46:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:23.071 10:46:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:33:23.071 10:46:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:23.071 10:46:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.071 10:46:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:23.071 10:46:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.071 10:46:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:23.071 10:46:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.071 10:46:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:23.071 10:46:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.071 00:33:23.071 real 0m11.568s 00:33:23.071 user 0m26.559s 00:33:23.071 sys 0m0.878s 00:33:23.071 10:46:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:23.071 10:46:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:23.071 ************************************ 00:33:23.071 END TEST fio_dif_1_multi_subsystems 00:33:23.071 ************************************ 00:33:23.071 10:46:57 nvmf_dif -- target/dif.sh@143 -- # run_test 
fio_dif_rand_params fio_dif_rand_params 00:33:23.071 10:46:57 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:23.071 10:46:57 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:23.071 10:46:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:23.071 ************************************ 00:33:23.071 START TEST fio_dif_rand_params 00:33:23.071 ************************************ 00:33:23.329 10:46:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:33:23.329 10:46:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:33:23.329 10:46:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:33:23.329 10:46:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:33:23.329 10:46:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:33:23.329 10:46:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:33:23.329 10:46:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:33:23.329 10:46:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:33:23.329 10:46:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:33:23.329 10:46:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:23.329 10:46:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:23.329 10:46:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:23.329 10:46:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:23.329 10:46:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:23.329 10:46:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.329 10:46:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:23.329 bdev_null0 00:33:23.329 10:46:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.329 10:46:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:23.329 10:46:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.329 10:46:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:23.329 10:46:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.330 10:46:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:23.330 10:46:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.330 10:46:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:23.330 10:46:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.330 10:46:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:23.330 10:46:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.330 10:46:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:23.330 [2024-12-12 10:46:57.130902] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:23.330 
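For reference, the create_subsystem helper traced above boils down to four RPC calls. A minimal sketch, assuming rpc.py is invoked directly from ./scripts/rpc.py (the suite actually goes through its rpc_cmd wrapper):

rpc=./scripts/rpc.py  # assumed path; the test suite wraps this as rpc_cmd
# 64 MiB null bdev with 512 B blocks, 16 B metadata, DIF type 3 protection
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
# expose it as an NVMe-oF subsystem and listen on TCP 10.0.0.2:4420
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

All four commands and their arguments are taken verbatim from the trace; only the $rpc variable and script path are illustrative.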
10:46:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.330 10:46:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:33:23.330 10:46:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:33:23.330 10:46:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:23.330 10:46:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:23.330 10:46:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:23.330 10:46:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:23.330 10:46:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:23.330 10:46:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:23.330 10:46:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:23.330 10:46:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:23.330 { 00:33:23.330 "params": { 00:33:23.330 "name": "Nvme$subsystem", 00:33:23.330 "trtype": "$TEST_TRANSPORT", 00:33:23.330 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:23.330 "adrfam": "ipv4", 00:33:23.330 "trsvcid": "$NVMF_PORT", 00:33:23.330 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:23.330 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:23.330 "hdgst": ${hdgst:-false}, 00:33:23.330 "ddgst": ${ddgst:-false} 00:33:23.330 }, 00:33:23.330 "method": "bdev_nvme_attach_controller" 00:33:23.330 } 00:33:23.330 EOF 00:33:23.330 )") 00:33:23.330 10:46:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:23.330 10:46:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:23.330 10:46:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:23.330 10:46:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:23.330 10:46:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:23.330 10:46:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:23.330 10:46:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:23.330 10:46:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:23.330 10:46:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:23.330 10:46:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:23.330 10:46:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:23.330 10:46:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:23.330 10:46:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:23.330 10:46:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:23.330 10:46:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:23.330 10:46:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
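The gen_nvmf_target_json trace above is easier to read collapsed into a function: one bdev_nvme_attach_controller fragment per subsystem id, comma-joined (the IFS=, step) and spliced into a "subsystems" document that jq validates and pretty-prints for the fio plugin. A reduced sketch, keeping only the fields visible in the trace with the environment values ($TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP, $NVMF_PORT) already substituted; the real helper in nvmf/common.sh carries additional bdev_nvme settings:

gen_nvmf_target_json() {
    local subsystem config=()
    # default subsystem id is 1 when called with no arguments, as traced
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,  # join fragments with commas, matching the IFS=, printf trace
    jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
EOF
}

Called as gen_nvmf_target_json 0 it yields the single Nvme0 controller printed next; the 24-thread fio_dif_rand_params run further below calls it with 0 1 2.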
00:33:23.330 10:46:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:23.330 10:46:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:23.330 "params": { 00:33:23.330 "name": "Nvme0", 00:33:23.330 "trtype": "tcp", 00:33:23.330 "traddr": "10.0.0.2", 00:33:23.330 "adrfam": "ipv4", 00:33:23.330 "trsvcid": "4420", 00:33:23.330 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:23.330 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:23.330 "hdgst": false, 00:33:23.330 "ddgst": false 00:33:23.330 }, 00:33:23.330 "method": "bdev_nvme_attach_controller" 00:33:23.330 }' 00:33:23.330 10:46:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:23.330 10:46:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:23.330 10:46:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:23.330 10:46:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:23.330 10:46:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:23.330 10:46:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:23.330 10:46:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:23.330 10:46:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:23.330 10:46:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:23.330 10:46:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:23.589 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:23.589 ... 
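Put together, the launch just traced preloads the fio external-ioengine plugin (prefixed by its ASan runtime whenever the ldd probes above find one) and hands fio the target JSON on one fd and the generated job file on another, which is where the /dev/fd/62 and /dev/fd/61 arguments come from. gen_fio_conf's output is consumed via /dev/fd/61 and never echoed, so the job body below is a hedged reconstruction from the traced parameters (randread, bs=128k, iodepth=3, numjobs=3, runtime=5); the Nvme0n1 filename follows SPDK's usual bdev naming and is an assumption:

plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
asan_lib=""  # the suite fills this from "ldd $plugin | grep libasan" when ASan is in play
# gen_nvmf_target_json is the sketch shown earlier
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf <(gen_nvmf_target_json 0) <(cat <<EOF
[global]
thread=1
time_based=1
runtime=5
[filename0]
rw=randread
bs=128k
iodepth=3
numjobs=3
filename=Nvme0n1
EOF
)

thread=1 is required by the spdk_bdev engine; process substitution is what turns both generated streams into the /dev/fd paths seen in the trace.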
00:33:23.589 fio-3.35 00:33:23.589 Starting 3 threads 00:33:30.150 00:33:30.150 filename0: (groupid=0, jobs=1): err= 0: pid=1772342: Thu Dec 12 10:47:03 2024 00:33:30.150 read: IOPS=315, BW=39.5MiB/s (41.4MB/s)(199MiB/5046msec) 00:33:30.150 slat (nsec): min=6202, max=33123, avg=10986.54, stdev=2685.37 00:33:30.150 clat (usec): min=3526, max=50277, avg=9455.42, stdev=5563.93 00:33:30.150 lat (usec): min=3533, max=50289, avg=9466.41, stdev=5563.79 00:33:30.150 clat percentiles (usec): 00:33:30.150 | 1.00th=[ 4293], 5.00th=[ 6259], 10.00th=[ 6980], 20.00th=[ 7832], 00:33:30.150 | 30.00th=[ 8160], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[ 9110], 00:33:30.150 | 70.00th=[ 9503], 80.00th=[ 9896], 90.00th=[10421], 95.00th=[11076], 00:33:30.150 | 99.00th=[47449], 99.50th=[49021], 99.90th=[50070], 99.95th=[50070], 00:33:30.150 | 99.99th=[50070] 00:33:30.150 bw ( KiB/s): min=32320, max=45824, per=35.22%, avg=40751.60, stdev=3813.92, samples=10 00:33:30.150 iops : min= 252, max= 358, avg=318.20, stdev=29.97, samples=10 00:33:30.150 lat (msec) : 4=0.63%, 10=82.12%, 20=15.24%, 50=1.94%, 100=0.06% 00:33:30.150 cpu : usr=94.33%, sys=5.35%, ctx=10, majf=0, minf=60 00:33:30.150 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:30.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:30.150 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:30.150 issued rwts: total=1594,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:30.150 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:30.150 filename0: (groupid=0, jobs=1): err= 0: pid=1772343: Thu Dec 12 10:47:03 2024 00:33:30.150 read: IOPS=282, BW=35.3MiB/s (37.0MB/s)(177MiB/5004msec) 00:33:30.150 slat (nsec): min=6237, max=54475, avg=11360.87, stdev=3635.38 00:33:30.150 clat (usec): min=2901, max=51664, avg=10601.88, stdev=6275.73 00:33:30.150 lat (usec): min=2908, max=51679, avg=10613.24, stdev=6275.68 00:33:30.150 clat percentiles (usec): 00:33:30.150 | 1.00th=[ 3589], 5.00th=[ 6194], 10.00th=[ 7570], 20.00th=[ 8586], 00:33:30.150 | 30.00th=[ 9110], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10290], 00:33:30.150 | 70.00th=[10683], 80.00th=[11207], 90.00th=[11863], 95.00th=[12649], 00:33:30.150 | 99.00th=[48497], 99.50th=[49021], 99.90th=[50070], 99.95th=[51643], 00:33:30.150 | 99.99th=[51643] 00:33:30.150 bw ( KiB/s): min=28928, max=40192, per=31.24%, avg=36147.20, stdev=3775.55, samples=10 00:33:30.150 iops : min= 226, max= 314, avg=282.40, stdev=29.50, samples=10 00:33:30.150 lat (msec) : 4=3.39%, 10=50.07%, 20=43.99%, 50=2.48%, 100=0.07% 00:33:30.150 cpu : usr=94.24%, sys=5.26%, ctx=14, majf=0, minf=19 00:33:30.150 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:30.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:30.150 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:30.150 issued rwts: total=1414,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:30.150 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:30.150 filename0: (groupid=0, jobs=1): err= 0: pid=1772344: Thu Dec 12 10:47:03 2024 00:33:30.150 read: IOPS=307, BW=38.5MiB/s (40.4MB/s)(194MiB/5043msec) 00:33:30.150 slat (nsec): min=6281, max=33777, avg=11129.76, stdev=2744.64 00:33:30.150 clat (usec): min=3550, max=49793, avg=9702.50, stdev=5018.60 00:33:30.151 lat (usec): min=3557, max=49805, avg=9713.63, stdev=5018.51 00:33:30.151 clat percentiles (usec): 00:33:30.151 | 1.00th=[ 3752], 5.00th=[ 6128], 10.00th=[ 
6652], 20.00th=[ 7963], 00:33:30.151 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9765], 00:33:30.151 | 70.00th=[10159], 80.00th=[10552], 90.00th=[11207], 95.00th=[11731], 00:33:30.151 | 99.00th=[44827], 99.50th=[46924], 99.90th=[49021], 99.95th=[49546], 00:33:30.151 | 99.99th=[49546] 00:33:30.151 bw ( KiB/s): min=24064, max=47104, per=34.32%, avg=39705.60, stdev=6096.95, samples=10 00:33:30.151 iops : min= 188, max= 368, avg=310.20, stdev=47.63, samples=10 00:33:30.151 lat (msec) : 4=2.06%, 10=65.10%, 20=31.17%, 50=1.67% 00:33:30.151 cpu : usr=93.57%, sys=6.13%, ctx=9, majf=0, minf=66 00:33:30.151 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:30.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:30.151 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:30.151 issued rwts: total=1553,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:30.151 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:30.151 00:33:30.151 Run status group 0 (all jobs): 00:33:30.151 READ: bw=113MiB/s (118MB/s), 35.3MiB/s-39.5MiB/s (37.0MB/s-41.4MB/s), io=570MiB (598MB), run=5004-5046msec 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 
-- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:30.151 bdev_null0 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:30.151 [2024-12-12 10:47:03.292144] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:30.151 bdev_null1 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:30.151 bdev_null2 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:30.151 10:47:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:30.151 { 00:33:30.152 "params": { 00:33:30.152 "name": "Nvme$subsystem", 00:33:30.152 "trtype": "$TEST_TRANSPORT", 00:33:30.152 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:30.152 "adrfam": "ipv4", 00:33:30.152 "trsvcid": "$NVMF_PORT", 00:33:30.152 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:30.152 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:30.152 "hdgst": ${hdgst:-false}, 00:33:30.152 "ddgst": ${ddgst:-false} 00:33:30.152 }, 00:33:30.152 "method": "bdev_nvme_attach_controller" 00:33:30.152 } 00:33:30.152 EOF 00:33:30.152 )") 00:33:30.152 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:30.152 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:30.152 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:30.152 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:30.152 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:30.152 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:30.152 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:30.152 10:47:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:30.152 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:30.152 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:30.152 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:30.152 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:30.152 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:30.152 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:30.152 10:47:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:30.152 10:47:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:30.152 { 00:33:30.152 "params": { 00:33:30.152 "name": "Nvme$subsystem", 00:33:30.152 "trtype": "$TEST_TRANSPORT", 00:33:30.152 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:30.152 "adrfam": "ipv4", 00:33:30.152 "trsvcid": "$NVMF_PORT", 00:33:30.152 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:30.152 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:30.152 "hdgst": ${hdgst:-false}, 00:33:30.152 "ddgst": ${ddgst:-false} 00:33:30.152 }, 00:33:30.152 "method": "bdev_nvme_attach_controller" 00:33:30.152 } 00:33:30.152 EOF 00:33:30.152 )") 00:33:30.152 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:30.152 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:30.152 10:47:03 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:30.152 10:47:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:30.152 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:30.152 10:47:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:30.152 10:47:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:30.152 10:47:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:30.152 { 00:33:30.152 "params": { 00:33:30.152 "name": "Nvme$subsystem", 00:33:30.152 "trtype": "$TEST_TRANSPORT", 00:33:30.152 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:30.152 "adrfam": "ipv4", 00:33:30.152 "trsvcid": "$NVMF_PORT", 00:33:30.152 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:30.152 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:30.152 "hdgst": ${hdgst:-false}, 00:33:30.152 "ddgst": ${ddgst:-false} 00:33:30.152 }, 00:33:30.152 "method": "bdev_nvme_attach_controller" 00:33:30.152 } 00:33:30.152 EOF 00:33:30.152 )") 00:33:30.152 10:47:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:30.152 10:47:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:33:30.152 10:47:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:30.152 10:47:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:30.152 "params": { 00:33:30.152 "name": "Nvme0", 00:33:30.152 "trtype": "tcp", 00:33:30.152 "traddr": "10.0.0.2", 00:33:30.152 "adrfam": "ipv4", 00:33:30.152 "trsvcid": "4420", 00:33:30.152 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:30.152 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:30.152 "hdgst": false, 00:33:30.152 "ddgst": false 00:33:30.152 }, 00:33:30.152 "method": "bdev_nvme_attach_controller" 00:33:30.152 },{ 00:33:30.152 "params": { 00:33:30.152 "name": "Nvme1", 00:33:30.152 "trtype": "tcp", 00:33:30.152 "traddr": "10.0.0.2", 00:33:30.152 "adrfam": "ipv4", 00:33:30.152 "trsvcid": "4420", 00:33:30.152 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:30.152 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:30.152 "hdgst": false, 00:33:30.152 "ddgst": false 00:33:30.152 }, 00:33:30.152 "method": "bdev_nvme_attach_controller" 00:33:30.152 },{ 00:33:30.152 "params": { 00:33:30.152 "name": "Nvme2", 00:33:30.152 "trtype": "tcp", 00:33:30.152 "traddr": "10.0.0.2", 00:33:30.152 "adrfam": "ipv4", 00:33:30.152 "trsvcid": "4420", 00:33:30.152 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:33:30.152 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:33:30.152 "hdgst": false, 00:33:30.152 "ddgst": false 00:33:30.152 }, 00:33:30.152 "method": "bdev_nvme_attach_controller" 00:33:30.152 }' 00:33:30.152 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:30.152 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:30.152 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:30.152 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:30.152 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:30.152 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:30.152 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:30.152 
10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:30.152 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:30.152 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:30.152 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:30.152 ... 00:33:30.152 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:30.152 ... 00:33:30.152 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:30.152 ... 00:33:30.152 fio-3.35 00:33:30.152 Starting 24 threads 00:33:42.405 00:33:42.405 filename0: (groupid=0, jobs=1): err= 0: pid=1773575: Thu Dec 12 10:47:14 2024 00:33:42.405 read: IOPS=525, BW=2103KiB/s (2154kB/s)(20.6MiB/10010msec) 00:33:42.405 slat (nsec): min=4206, max=68265, avg=16362.96, stdev=6769.62 00:33:42.405 clat (usec): min=16826, max=43981, avg=30284.57, stdev=2081.42 00:33:42.405 lat (usec): min=16836, max=43990, avg=30300.93, stdev=2080.98 00:33:42.405 clat percentiles (usec): 00:33:42.405 | 1.00th=[17957], 5.00th=[29754], 10.00th=[30016], 20.00th=[30278], 00:33:42.405 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:33:42.405 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:33:42.405 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:33:42.405 | 99.99th=[43779] 00:33:42.405 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2101.89, stdev=63.38, samples=19 00:33:42.405 iops : min= 512, max= 544, avg=525.47, stdev=15.84, samples=19 00:33:42.405 lat (msec) : 20=1.52%, 50=98.48% 00:33:42.405 cpu : usr=98.40%, sys=1.23%, ctx=13, majf=0, minf=26 00:33:42.405 IO depths : 1=5.8%, 2=11.9%, 4=24.5%, 8=51.2%, 16=6.7%, 32=0.0%, >=64=0.0% 00:33:42.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.405 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.405 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:42.405 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:42.405 filename0: (groupid=0, jobs=1): err= 0: pid=1773576: Thu Dec 12 10:47:14 2024 00:33:42.405 read: IOPS=523, BW=2095KiB/s (2145kB/s)(20.5MiB/10003msec) 00:33:42.406 slat (usec): min=7, max=119, avg=37.65, stdev=25.83 00:33:42.406 clat (usec): min=18799, max=51694, avg=30181.78, stdev=1798.00 00:33:42.406 lat (usec): min=18807, max=51710, avg=30219.42, stdev=1797.08 00:33:42.406 clat percentiles (usec): 00:33:42.406 | 1.00th=[24511], 5.00th=[29754], 10.00th=[29754], 20.00th=[29754], 00:33:42.406 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:33:42.406 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:33:42.406 | 99.00th=[36963], 99.50th=[38536], 99.90th=[51643], 99.95th=[51643], 00:33:42.406 | 99.99th=[51643] 00:33:42.406 bw ( KiB/s): min= 1920, max= 2176, per=4.14%, avg=2090.95, stdev=73.91, samples=19 00:33:42.406 iops : min= 480, max= 544, avg=522.74, stdev=18.48, samples=19 00:33:42.406 lat (msec) : 20=0.42%, 50=99.27%, 100=0.31% 00:33:42.406 cpu : usr=98.57%, sys=1.04%, ctx=8, majf=0, minf=33 00:33:42.406 IO depths : 1=6.0%, 2=12.2%, 4=24.6%, 8=50.7%, 16=6.5%, 
32=0.0%, >=64=0.0% 00:33:42.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.406 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.406 issued rwts: total=5238,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:42.406 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:42.406 filename0: (groupid=0, jobs=1): err= 0: pid=1773577: Thu Dec 12 10:47:14 2024 00:33:42.406 read: IOPS=524, BW=2097KiB/s (2148kB/s)(20.5MiB/10009msec) 00:33:42.406 slat (usec): min=4, max=102, avg=43.08, stdev=23.40 00:33:42.406 clat (usec): min=12233, max=58685, avg=30120.92, stdev=1750.07 00:33:42.406 lat (usec): min=12247, max=58698, avg=30164.00, stdev=1747.97 00:33:42.406 clat percentiles (usec): 00:33:42.406 | 1.00th=[28705], 5.00th=[29492], 10.00th=[29492], 20.00th=[29754], 00:33:42.406 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:33:42.406 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30540], 00:33:42.406 | 99.00th=[31327], 99.50th=[31589], 99.90th=[52691], 99.95th=[52691], 00:33:42.406 | 99.99th=[58459] 00:33:42.406 bw ( KiB/s): min= 1920, max= 2176, per=4.14%, avg=2092.15, stdev=74.45, samples=20 00:33:42.406 iops : min= 480, max= 544, avg=523.00, stdev=18.58, samples=20 00:33:42.406 lat (msec) : 20=0.30%, 50=99.39%, 100=0.30% 00:33:42.406 cpu : usr=98.66%, sys=0.92%, ctx=14, majf=0, minf=31 00:33:42.406 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:42.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.406 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.406 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:42.406 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:42.406 filename0: (groupid=0, jobs=1): err= 0: pid=1773578: Thu Dec 12 10:47:14 2024 00:33:42.406 read: IOPS=533, BW=2136KiB/s (2187kB/s)(20.9MiB/10008msec) 00:33:42.406 slat (nsec): min=7606, max=56407, avg=11981.66, stdev=4501.36 00:33:42.406 clat (usec): min=3491, max=43736, avg=29859.69, stdev=3432.36 00:33:42.406 lat (usec): min=3508, max=43745, avg=29871.68, stdev=3431.30 00:33:42.406 clat percentiles (usec): 00:33:42.406 | 1.00th=[ 5407], 5.00th=[29754], 10.00th=[30278], 20.00th=[30278], 00:33:42.406 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30540], 00:33:42.406 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:33:42.406 | 99.00th=[31589], 99.50th=[31851], 99.90th=[31851], 99.95th=[36963], 00:33:42.406 | 99.99th=[43779] 00:33:42.406 bw ( KiB/s): min= 2048, max= 2816, per=4.22%, avg=2131.20, stdev=172.61, samples=20 00:33:42.406 iops : min= 512, max= 704, avg=532.80, stdev=43.15, samples=20 00:33:42.406 lat (msec) : 4=0.56%, 10=0.64%, 20=1.53%, 50=97.27% 00:33:42.406 cpu : usr=98.25%, sys=1.32%, ctx=47, majf=0, minf=41 00:33:42.406 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:33:42.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.406 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.406 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:42.406 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:42.406 filename0: (groupid=0, jobs=1): err= 0: pid=1773579: Thu Dec 12 10:47:14 2024 00:33:42.406 read: IOPS=527, BW=2108KiB/s (2159kB/s)(20.6MiB/10017msec) 00:33:42.406 slat (usec): min=9, max=100, avg=34.81, stdev=15.95 00:33:42.406 clat 
(usec): min=10227, max=31974, avg=30078.51, stdev=1698.97 00:33:42.406 lat (usec): min=10240, max=31989, avg=30113.32, stdev=1698.55 00:33:42.406 clat percentiles (usec): 00:33:42.406 | 1.00th=[19792], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:33:42.406 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:33:42.406 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:33:42.406 | 99.00th=[31327], 99.50th=[31589], 99.90th=[31851], 99.95th=[31851], 00:33:42.406 | 99.99th=[31851] 00:33:42.406 bw ( KiB/s): min= 2048, max= 2304, per=4.17%, avg=2105.60, stdev=77.42, samples=20 00:33:42.406 iops : min= 512, max= 576, avg=526.40, stdev=19.35, samples=20 00:33:42.406 lat (msec) : 20=1.12%, 50=98.88% 00:33:42.406 cpu : usr=98.12%, sys=1.43%, ctx=31, majf=0, minf=32 00:33:42.406 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:42.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.406 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.406 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:42.406 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:42.406 filename0: (groupid=0, jobs=1): err= 0: pid=1773580: Thu Dec 12 10:47:14 2024 00:33:42.406 read: IOPS=524, BW=2097KiB/s (2148kB/s)(20.5MiB/10009msec) 00:33:42.406 slat (nsec): min=4386, max=75573, avg=20464.86, stdev=10496.82 00:33:42.406 clat (usec): min=12236, max=51482, avg=30310.80, stdev=1651.24 00:33:42.406 lat (usec): min=12243, max=51495, avg=30331.26, stdev=1651.20 00:33:42.406 clat percentiles (usec): 00:33:42.406 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30016], 20.00th=[30278], 00:33:42.406 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:33:42.406 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:33:42.406 | 99.00th=[31327], 99.50th=[32113], 99.90th=[51643], 99.95th=[51643], 00:33:42.406 | 99.99th=[51643] 00:33:42.406 bw ( KiB/s): min= 1920, max= 2176, per=4.14%, avg=2092.15, stdev=74.45, samples=20 00:33:42.406 iops : min= 480, max= 544, avg=523.00, stdev=18.58, samples=20 00:33:42.406 lat (msec) : 20=0.30%, 50=99.39%, 100=0.30% 00:33:42.406 cpu : usr=98.72%, sys=0.91%, ctx=13, majf=0, minf=34 00:33:42.406 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:42.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.406 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.406 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:42.406 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:42.406 filename0: (groupid=0, jobs=1): err= 0: pid=1773581: Thu Dec 12 10:47:14 2024 00:33:42.406 read: IOPS=527, BW=2108KiB/s (2159kB/s)(20.6MiB/10017msec) 00:33:42.406 slat (usec): min=8, max=121, avg=43.31, stdev=23.60 00:33:42.406 clat (usec): min=9623, max=40713, avg=29970.44, stdev=1876.34 00:33:42.406 lat (usec): min=9643, max=40734, avg=30013.75, stdev=1877.42 00:33:42.406 clat percentiles (usec): 00:33:42.406 | 1.00th=[19792], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:33:42.406 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:33:42.406 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:33:42.406 | 99.00th=[31327], 99.50th=[31851], 99.90th=[40633], 99.95th=[40633], 00:33:42.406 | 99.99th=[40633] 00:33:42.406 bw ( KiB/s): min= 2048, max= 2304, per=4.17%, 
avg=2105.60, stdev=77.42, samples=20 00:33:42.406 iops : min= 512, max= 576, avg=526.40, stdev=19.35, samples=20 00:33:42.406 lat (msec) : 10=0.04%, 20=1.17%, 50=98.79% 00:33:42.406 cpu : usr=98.66%, sys=0.95%, ctx=16, majf=0, minf=27 00:33:42.406 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:33:42.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.406 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.406 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:42.406 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:42.406 filename0: (groupid=0, jobs=1): err= 0: pid=1773582: Thu Dec 12 10:47:14 2024 00:33:42.406 read: IOPS=525, BW=2101KiB/s (2152kB/s)(20.5MiB/10002msec) 00:33:42.406 slat (nsec): min=6031, max=47670, avg=17712.32, stdev=6790.30 00:33:42.406 clat (usec): min=15111, max=55323, avg=30311.13, stdev=1698.48 00:33:42.406 lat (usec): min=15122, max=55340, avg=30328.84, stdev=1698.38 00:33:42.406 clat percentiles (usec): 00:33:42.406 | 1.00th=[23725], 5.00th=[30016], 10.00th=[30016], 20.00th=[30278], 00:33:42.406 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:33:42.406 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:33:42.406 | 99.00th=[31589], 99.50th=[37487], 99.90th=[48497], 99.95th=[55313], 00:33:42.406 | 99.99th=[55313] 00:33:42.406 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2097.68, stdev=75.59, samples=19 00:33:42.406 iops : min= 480, max= 544, avg=524.42, stdev=18.90, samples=19 00:33:42.406 lat (msec) : 20=0.61%, 50=99.31%, 100=0.08% 00:33:42.406 cpu : usr=98.43%, sys=1.19%, ctx=16, majf=0, minf=25 00:33:42.406 IO depths : 1=6.1%, 2=12.2%, 4=24.6%, 8=50.7%, 16=6.5%, 32=0.0%, >=64=0.0% 00:33:42.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.407 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.407 issued rwts: total=5254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:42.407 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:42.407 filename1: (groupid=0, jobs=1): err= 0: pid=1773583: Thu Dec 12 10:47:14 2024 00:33:42.407 read: IOPS=525, BW=2102KiB/s (2152kB/s)(20.5MiB/10004msec) 00:33:42.407 slat (usec): min=4, max=126, avg=37.71, stdev=25.73 00:33:42.407 clat (usec): min=17629, max=65124, avg=30075.57, stdev=2470.54 00:33:42.407 lat (usec): min=17645, max=65137, avg=30113.28, stdev=2470.82 00:33:42.407 clat percentiles (usec): 00:33:42.407 | 1.00th=[20055], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:33:42.407 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:33:42.407 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:33:42.407 | 99.00th=[42206], 99.50th=[43254], 99.90th=[53216], 99.95th=[53216], 00:33:42.407 | 99.99th=[65274] 00:33:42.407 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2098.53, stdev=60.40, samples=19 00:33:42.407 iops : min= 512, max= 544, avg=524.63, stdev=15.10, samples=19 00:33:42.407 lat (msec) : 20=1.01%, 50=98.69%, 100=0.30% 00:33:42.407 cpu : usr=98.37%, sys=1.24%, ctx=18, majf=0, minf=33 00:33:42.407 IO depths : 1=5.5%, 2=11.3%, 4=23.6%, 8=52.4%, 16=7.2%, 32=0.0%, >=64=0.0% 00:33:42.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.407 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.407 issued rwts: total=5256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:42.407 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:33:42.407 filename1: (groupid=0, jobs=1): err= 0: pid=1773584: Thu Dec 12 10:47:14 2024 00:33:42.407 read: IOPS=524, BW=2099KiB/s (2150kB/s)(20.5MiB/10003msec) 00:33:42.407 slat (usec): min=4, max=120, avg=37.87, stdev=26.09 00:33:42.407 clat (usec): min=18825, max=51729, avg=30122.04, stdev=1951.13 00:33:42.407 lat (usec): min=18833, max=51742, avg=30159.90, stdev=1950.90 00:33:42.407 clat percentiles (usec): 00:33:42.407 | 1.00th=[22152], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:33:42.407 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:33:42.407 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:33:42.407 | 99.00th=[34341], 99.50th=[38536], 99.90th=[51643], 99.95th=[51643], 00:33:42.407 | 99.99th=[51643] 00:33:42.407 bw ( KiB/s): min= 1920, max= 2320, per=4.15%, avg=2096.00, stdev=89.72, samples=19 00:33:42.407 iops : min= 480, max= 580, avg=524.00, stdev=22.43, samples=19 00:33:42.407 lat (msec) : 20=0.95%, 50=98.74%, 100=0.30% 00:33:42.407 cpu : usr=98.59%, sys=1.03%, ctx=13, majf=0, minf=26 00:33:42.407 IO depths : 1=5.9%, 2=12.0%, 4=24.3%, 8=51.2%, 16=6.6%, 32=0.0%, >=64=0.0% 00:33:42.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.407 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.407 issued rwts: total=5250,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:42.407 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:42.407 filename1: (groupid=0, jobs=1): err= 0: pid=1773585: Thu Dec 12 10:47:14 2024 00:33:42.407 read: IOPS=532, BW=2130KiB/s (2181kB/s)(20.8MiB/10004msec) 00:33:42.407 slat (usec): min=7, max=123, avg=27.41, stdev=21.40 00:33:42.407 clat (usec): min=3456, max=42012, avg=29841.77, stdev=3342.37 00:33:42.407 lat (usec): min=3467, max=42051, avg=29869.18, stdev=3342.38 00:33:42.407 clat percentiles (usec): 00:33:42.407 | 1.00th=[ 5145], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:33:42.407 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:33:42.407 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:33:42.407 | 99.00th=[31327], 99.50th=[31589], 99.90th=[32113], 99.95th=[41157], 00:33:42.407 | 99.99th=[42206] 00:33:42.407 bw ( KiB/s): min= 2048, max= 2693, per=4.21%, avg=2125.05, stdev=147.25, samples=20 00:33:42.407 iops : min= 512, max= 673, avg=531.25, stdev=36.76, samples=20 00:33:42.407 lat (msec) : 4=0.47%, 10=0.73%, 20=1.28%, 50=97.52% 00:33:42.407 cpu : usr=98.35%, sys=1.28%, ctx=15, majf=0, minf=50 00:33:42.407 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:42.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.407 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.407 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:42.407 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:42.407 filename1: (groupid=0, jobs=1): err= 0: pid=1773586: Thu Dec 12 10:47:14 2024 00:33:42.407 read: IOPS=525, BW=2103KiB/s (2154kB/s)(20.6MiB/10011msec) 00:33:42.407 slat (usec): min=4, max=109, avg=37.17, stdev=16.69 00:33:42.407 clat (usec): min=15656, max=32054, avg=30086.95, stdev=1157.40 00:33:42.407 lat (usec): min=15669, max=32089, avg=30124.12, stdev=1158.64 00:33:42.407 clat percentiles (usec): 00:33:42.407 | 1.00th=[26608], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:33:42.407 | 30.00th=[30016], 
40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:33:42.407 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:33:42.407 | 99.00th=[31327], 99.50th=[31327], 99.90th=[31851], 99.95th=[31851], 00:33:42.407 | 99.99th=[32113] 00:33:42.407 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2095.16, stdev=63.44, samples=19 00:33:42.407 iops : min= 512, max= 544, avg=523.79, stdev=15.86, samples=19 00:33:42.407 lat (msec) : 20=0.61%, 50=99.39% 00:33:42.407 cpu : usr=98.49%, sys=1.11%, ctx=34, majf=0, minf=31 00:33:42.407 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:42.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.407 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.407 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:42.407 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:42.407 filename1: (groupid=0, jobs=1): err= 0: pid=1773587: Thu Dec 12 10:47:14 2024 00:33:42.407 read: IOPS=524, BW=2100KiB/s (2150kB/s)(20.5MiB/10008msec) 00:33:42.407 slat (nsec): min=5570, max=84637, avg=22256.01, stdev=12473.61 00:33:42.407 clat (usec): min=8050, max=55075, avg=30278.19, stdev=2178.50 00:33:42.407 lat (usec): min=8078, max=55091, avg=30300.44, stdev=2177.67 00:33:42.407 clat percentiles (usec): 00:33:42.407 | 1.00th=[20317], 5.00th=[30016], 10.00th=[30016], 20.00th=[30278], 00:33:42.407 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:33:42.407 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:33:42.407 | 99.00th=[37487], 99.50th=[38536], 99.90th=[54789], 99.95th=[54789], 00:33:42.407 | 99.99th=[55313] 00:33:42.407 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2095.20, stdev=74.41, samples=20 00:33:42.407 iops : min= 480, max= 544, avg=523.80, stdev=18.60, samples=20 00:33:42.407 lat (msec) : 10=0.30%, 20=0.42%, 50=99.11%, 100=0.17% 00:33:42.407 cpu : usr=98.46%, sys=1.11%, ctx=38, majf=0, minf=24 00:33:42.407 IO depths : 1=6.0%, 2=12.1%, 4=24.6%, 8=50.8%, 16=6.5%, 32=0.0%, >=64=0.0% 00:33:42.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.407 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.407 issued rwts: total=5254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:42.407 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:42.407 filename1: (groupid=0, jobs=1): err= 0: pid=1773588: Thu Dec 12 10:47:14 2024 00:33:42.407 read: IOPS=525, BW=2102KiB/s (2153kB/s)(20.6MiB/10016msec) 00:33:42.407 slat (usec): min=7, max=127, avg=44.10, stdev=24.15 00:33:42.407 clat (usec): min=14525, max=40686, avg=30036.78, stdev=1241.86 00:33:42.407 lat (usec): min=14535, max=40710, avg=30080.88, stdev=1242.68 00:33:42.407 clat percentiles (usec): 00:33:42.407 | 1.00th=[25560], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:33:42.407 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:33:42.407 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:33:42.407 | 99.00th=[31327], 99.50th=[31589], 99.90th=[40633], 99.95th=[40633], 00:33:42.407 | 99.99th=[40633] 00:33:42.407 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2095.16, stdev=63.44, samples=19 00:33:42.407 iops : min= 512, max= 544, avg=523.79, stdev=15.86, samples=19 00:33:42.407 lat (msec) : 20=0.68%, 50=99.32% 00:33:42.407 cpu : usr=98.48%, sys=1.13%, ctx=14, majf=0, minf=32 00:33:42.407 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 
16=6.4%, 32=0.0%, >=64=0.0% 00:33:42.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.407 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.407 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:42.407 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:42.407 filename1: (groupid=0, jobs=1): err= 0: pid=1773589: Thu Dec 12 10:47:14 2024 00:33:42.407 read: IOPS=526, BW=2108KiB/s (2159kB/s)(20.6MiB/10019msec) 00:33:42.407 slat (nsec): min=9026, max=97617, avg=37121.33, stdev=14430.75 00:33:42.407 clat (usec): min=9548, max=31948, avg=30041.55, stdev=1688.74 00:33:42.407 lat (usec): min=9566, max=31966, avg=30078.67, stdev=1689.31 00:33:42.407 clat percentiles (usec): 00:33:42.407 | 1.00th=[20055], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:33:42.407 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:33:42.407 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:33:42.407 | 99.00th=[31327], 99.50th=[31327], 99.90th=[31851], 99.95th=[31851], 00:33:42.407 | 99.99th=[31851] 00:33:42.407 bw ( KiB/s): min= 2048, max= 2304, per=4.17%, avg=2105.60, stdev=77.42, samples=20 00:33:42.408 iops : min= 512, max= 576, avg=526.40, stdev=19.35, samples=20 00:33:42.408 lat (msec) : 10=0.04%, 20=0.97%, 50=99.00% 00:33:42.408 cpu : usr=98.52%, sys=1.04%, ctx=40, majf=0, minf=34 00:33:42.408 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:42.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.408 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.408 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:42.408 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:42.408 filename1: (groupid=0, jobs=1): err= 0: pid=1773590: Thu Dec 12 10:47:14 2024 00:33:42.408 read: IOPS=522, BW=2092KiB/s (2142kB/s)(20.4MiB/10004msec) 00:33:42.408 slat (usec): min=6, max=123, avg=37.70, stdev=26.42 00:33:42.408 clat (usec): min=19141, max=51671, avg=30227.01, stdev=1513.16 00:33:42.408 lat (usec): min=19159, max=51685, avg=30264.71, stdev=1510.91 00:33:42.408 clat percentiles (usec): 00:33:42.408 | 1.00th=[29492], 5.00th=[29754], 10.00th=[29754], 20.00th=[29754], 00:33:42.408 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:33:42.408 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:33:42.408 | 99.00th=[31065], 99.50th=[37487], 99.90th=[51643], 99.95th=[51643], 00:33:42.408 | 99.99th=[51643] 00:33:42.408 bw ( KiB/s): min= 1920, max= 2176, per=4.14%, avg=2088.42, stdev=74.55, samples=19 00:33:42.408 iops : min= 480, max= 544, avg=522.11, stdev=18.64, samples=19 00:33:42.408 lat (msec) : 20=0.31%, 50=99.39%, 100=0.31% 00:33:42.408 cpu : usr=98.38%, sys=1.24%, ctx=14, majf=0, minf=26 00:33:42.408 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:33:42.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.408 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.408 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:42.408 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:42.408 filename2: (groupid=0, jobs=1): err= 0: pid=1773591: Thu Dec 12 10:47:14 2024 00:33:42.408 read: IOPS=527, BW=2108KiB/s (2159kB/s)(20.6MiB/10018msec) 00:33:42.408 slat (usec): min=8, max=127, avg=44.63, stdev=23.97 00:33:42.408 clat 
(usec): min=9550, max=32062, avg=29955.30, stdev=1691.47 00:33:42.408 lat (usec): min=9559, max=32078, avg=29999.93, stdev=1692.56 00:33:42.408 clat percentiles (usec): 00:33:42.408 | 1.00th=[20055], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:33:42.408 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:33:42.408 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:33:42.408 | 99.00th=[31327], 99.50th=[31589], 99.90th=[31851], 99.95th=[32113], 00:33:42.408 | 99.99th=[32113] 00:33:42.408 bw ( KiB/s): min= 2048, max= 2304, per=4.17%, avg=2105.60, stdev=77.42, samples=20 00:33:42.408 iops : min= 512, max= 576, avg=526.40, stdev=19.35, samples=20 00:33:42.408 lat (msec) : 10=0.04%, 20=0.98%, 50=98.98% 00:33:42.408 cpu : usr=98.59%, sys=1.02%, ctx=16, majf=0, minf=34 00:33:42.408 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:42.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.408 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.408 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:42.408 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:42.408 filename2: (groupid=0, jobs=1): err= 0: pid=1773592: Thu Dec 12 10:47:14 2024 00:33:42.408 read: IOPS=523, BW=2092KiB/s (2143kB/s)(20.4MiB/10002msec) 00:33:42.408 slat (nsec): min=7594, max=43610, avg=16203.91, stdev=6263.92 00:33:42.408 clat (usec): min=17879, max=65913, avg=30432.87, stdev=2250.11 00:33:42.408 lat (usec): min=17897, max=65929, avg=30449.07, stdev=2249.86 00:33:42.408 clat percentiles (usec): 00:33:42.408 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30278], 20.00th=[30278], 00:33:42.408 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:33:42.408 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:33:42.408 | 99.00th=[31589], 99.50th=[42730], 99.90th=[65799], 99.95th=[65799], 00:33:42.408 | 99.99th=[65799] 00:33:42.408 bw ( KiB/s): min= 1920, max= 2176, per=4.14%, avg=2088.42, stdev=74.55, samples=19 00:33:42.408 iops : min= 480, max= 544, avg=522.11, stdev=18.64, samples=19 00:33:42.408 lat (msec) : 20=0.48%, 50=99.22%, 100=0.31% 00:33:42.408 cpu : usr=98.45%, sys=1.17%, ctx=10, majf=0, minf=23 00:33:42.408 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:33:42.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.408 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.408 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:42.408 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:42.408 filename2: (groupid=0, jobs=1): err= 0: pid=1773593: Thu Dec 12 10:47:14 2024 00:33:42.408 read: IOPS=524, BW=2097KiB/s (2147kB/s)(20.5MiB/10008msec) 00:33:42.408 slat (usec): min=7, max=119, avg=36.69, stdev=27.46 00:33:42.408 clat (usec): min=9461, max=82137, avg=30178.89, stdev=3064.45 00:33:42.408 lat (usec): min=9471, max=82153, avg=30215.58, stdev=3063.58 00:33:42.408 clat percentiles (usec): 00:33:42.408 | 1.00th=[20317], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:33:42.408 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:33:42.408 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:33:42.408 | 99.00th=[36963], 99.50th=[46924], 99.90th=[65799], 99.95th=[65799], 00:33:42.408 | 99.99th=[82314] 00:33:42.408 bw ( KiB/s): min= 1840, max= 2224, per=4.15%, 
avg=2093.60, stdev=92.90, samples=20 00:33:42.408 iops : min= 460, max= 556, avg=523.40, stdev=23.23, samples=20 00:33:42.408 lat (msec) : 10=0.23%, 20=0.74%, 50=98.72%, 100=0.30% 00:33:42.408 cpu : usr=98.23%, sys=1.40%, ctx=11, majf=0, minf=29 00:33:42.408 IO depths : 1=5.4%, 2=11.3%, 4=24.1%, 8=52.0%, 16=7.2%, 32=0.0%, >=64=0.0% 00:33:42.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.408 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.408 issued rwts: total=5246,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:42.408 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:42.408 filename2: (groupid=0, jobs=1): err= 0: pid=1773594: Thu Dec 12 10:47:14 2024 00:33:42.408 read: IOPS=527, BW=2108KiB/s (2159kB/s)(20.6MiB/10017msec) 00:33:42.408 slat (usec): min=8, max=124, avg=41.24, stdev=24.37 00:33:42.408 clat (usec): min=10392, max=40941, avg=30030.87, stdev=1720.23 00:33:42.408 lat (usec): min=10406, max=40977, avg=30072.11, stdev=1719.45 00:33:42.408 clat percentiles (usec): 00:33:42.408 | 1.00th=[19792], 5.00th=[29492], 10.00th=[29754], 20.00th=[30016], 00:33:42.408 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:33:42.408 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:33:42.408 | 99.00th=[31327], 99.50th=[31589], 99.90th=[32113], 99.95th=[32113], 00:33:42.408 | 99.99th=[41157] 00:33:42.408 bw ( KiB/s): min= 2048, max= 2304, per=4.17%, avg=2105.60, stdev=77.42, samples=20 00:33:42.408 iops : min= 512, max= 576, avg=526.40, stdev=19.35, samples=20 00:33:42.408 lat (msec) : 20=1.12%, 50=98.88% 00:33:42.408 cpu : usr=98.47%, sys=1.14%, ctx=12, majf=0, minf=29 00:33:42.408 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:42.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.408 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.408 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:42.408 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:42.408 filename2: (groupid=0, jobs=1): err= 0: pid=1773595: Thu Dec 12 10:47:14 2024 00:33:42.408 read: IOPS=525, BW=2102KiB/s (2153kB/s)(20.6MiB/10016msec) 00:33:42.408 slat (usec): min=8, max=126, avg=42.88, stdev=23.88 00:33:42.408 clat (usec): min=15630, max=40636, avg=30043.40, stdev=1128.63 00:33:42.408 lat (usec): min=15643, max=40660, avg=30086.28, stdev=1129.29 00:33:42.408 clat percentiles (usec): 00:33:42.408 | 1.00th=[26084], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:33:42.408 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:33:42.408 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:33:42.408 | 99.00th=[31327], 99.50th=[31589], 99.90th=[31851], 99.95th=[32113], 00:33:42.408 | 99.99th=[40633] 00:33:42.408 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2095.16, stdev=63.44, samples=19 00:33:42.408 iops : min= 512, max= 544, avg=523.79, stdev=15.86, samples=19 00:33:42.408 lat (msec) : 20=0.65%, 50=99.35% 00:33:42.408 cpu : usr=98.65%, sys=0.97%, ctx=13, majf=0, minf=24 00:33:42.408 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:42.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.408 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.408 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:42.408 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:33:42.408 filename2: (groupid=0, jobs=1): err= 0: pid=1773596: Thu Dec 12 10:47:14 2024 00:33:42.408 read: IOPS=533, BW=2132KiB/s (2183kB/s)(20.8MiB/10009msec) 00:33:42.408 slat (usec): min=7, max=127, avg=26.16, stdev=22.24 00:33:42.408 clat (usec): min=4035, max=32042, avg=29819.44, stdev=3354.52 00:33:42.409 lat (usec): min=4045, max=32090, avg=29845.60, stdev=3354.98 00:33:42.409 clat percentiles (usec): 00:33:42.409 | 1.00th=[ 7898], 5.00th=[29492], 10.00th=[29754], 20.00th=[30016], 00:33:42.409 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:33:42.409 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[30802], 00:33:42.409 | 99.00th=[31589], 99.50th=[31589], 99.90th=[31851], 99.95th=[32113], 00:33:42.409 | 99.99th=[32113] 00:33:42.409 bw ( KiB/s): min= 2048, max= 2744, per=4.21%, avg=2127.60, stdev=157.68, samples=20 00:33:42.409 iops : min= 512, max= 686, avg=531.90, stdev=39.42, samples=20 00:33:42.409 lat (msec) : 10=1.46%, 20=1.07%, 50=97.47% 00:33:42.409 cpu : usr=98.47%, sys=1.14%, ctx=21, majf=0, minf=35 00:33:42.409 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:33:42.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.409 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.409 issued rwts: total=5335,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:42.409 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:42.409 filename2: (groupid=0, jobs=1): err= 0: pid=1773597: Thu Dec 12 10:47:14 2024 00:33:42.409 read: IOPS=524, BW=2098KiB/s (2148kB/s)(20.5MiB/10007msec) 00:33:42.409 slat (usec): min=7, max=121, avg=29.71, stdev=24.14 00:33:42.409 clat (usec): min=8067, max=65347, avg=30253.01, stdev=2796.72 00:33:42.409 lat (usec): min=8075, max=65386, avg=30282.72, stdev=2796.05 00:33:42.409 clat percentiles (usec): 00:33:42.409 | 1.00th=[20317], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:33:42.409 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:33:42.409 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:33:42.409 | 99.00th=[38011], 99.50th=[41157], 99.90th=[65274], 99.95th=[65274], 00:33:42.409 | 99.99th=[65274] 00:33:42.409 bw ( KiB/s): min= 1923, max= 2176, per=4.14%, avg=2092.95, stdev=66.80, samples=20 00:33:42.409 iops : min= 480, max= 544, avg=523.20, stdev=16.80, samples=20 00:33:42.409 lat (msec) : 10=0.30%, 20=0.30%, 50=99.09%, 100=0.30% 00:33:42.409 cpu : usr=98.47%, sys=1.14%, ctx=13, majf=0, minf=27 00:33:42.409 IO depths : 1=4.6%, 2=9.5%, 4=19.8%, 8=57.1%, 16=9.0%, 32=0.0%, >=64=0.0% 00:33:42.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.409 complete : 0=0.0%, 4=93.0%, 8=2.3%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.409 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:42.409 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:42.409 filename2: (groupid=0, jobs=1): err= 0: pid=1773598: Thu Dec 12 10:47:14 2024 00:33:42.409 read: IOPS=524, BW=2098KiB/s (2148kB/s)(20.5MiB/10007msec) 00:33:42.409 slat (usec): min=7, max=106, avg=42.32, stdev=21.63 00:33:42.409 clat (usec): min=6892, max=65210, avg=30132.08, stdev=2418.78 00:33:42.409 lat (usec): min=6900, max=65255, avg=30174.41, stdev=2418.07 00:33:42.409 clat percentiles (usec): 00:33:42.409 | 1.00th=[28705], 5.00th=[29492], 10.00th=[29492], 20.00th=[29754], 00:33:42.409 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 
60.00th=[30278], 00:33:42.409 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:33:42.409 | 99.00th=[31327], 99.50th=[31589], 99.90th=[65274], 99.95th=[65274], 00:33:42.409 | 99.99th=[65274] 00:33:42.409 bw ( KiB/s): min= 1920, max= 2176, per=4.14%, avg=2092.80, stdev=75.15, samples=20 00:33:42.409 iops : min= 480, max= 544, avg=523.20, stdev=18.79, samples=20 00:33:42.409 lat (msec) : 10=0.30%, 20=0.19%, 50=99.20%, 100=0.30% 00:33:42.409 cpu : usr=98.68%, sys=0.92%, ctx=12, majf=0, minf=26 00:33:42.409 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:42.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.409 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.409 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:42.409 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:42.409 00:33:42.409 Run status group 0 (all jobs): 00:33:42.409 READ: bw=49.3MiB/s (51.7MB/s), 2092KiB/s-2136KiB/s (2142kB/s-2187kB/s), io=494MiB (518MB), run=10002-10019msec 00:33:42.409 10:47:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:33:42.409 10:47:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:42.409 10:47:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:42.409 10:47:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:42.409 10:47:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:42.409 10:47:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:42.409 10:47:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.409 10:47:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:42.409 10:47:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.409 10:47:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:42.409 10:47:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.409 10:47:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:42.409 10:47:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.409 10:47:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:42.409 10:47:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:42.409 10:47:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:42.409 10:47:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:42.409 10:47:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.409 10:47:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:42.409 10:47:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.409 10:47:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:42.409 10:47:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.409 10:47:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:42.409 10:47:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
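A quick consistency check on the per-file stats above: every file in this group shows a BW/IOPS ratio of roughly 4 KiB per I/O (the job file itself is not reproduced in this log, so the block size is inferred from that ratio, not confirmed). Average bandwidth should therefore track IOPS times 4 KiB, as a minimal sketch shows:

    # hedged sanity check, assuming 4 KiB reads (inferred from BW/IOPS):
    # filename1 (pid 1773589) averages 526.40 IOPS and 2105.60 KiB/s
    echo "$((526 * 4)) KiB/s"   # -> 2104 KiB/s, within rounding of 2105.60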
00:33:42.409 10:47:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:42.409 10:47:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:33:42.409 10:47:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:33:42.409 10:47:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:42.409 10:47:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.409 10:47:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:42.409 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.409 10:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:33:42.409 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.409 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:42.409 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.409 10:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:33:42.409 10:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:33:42.409 10:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:33:42.409 10:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:33:42.409 10:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:33:42.409 10:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:33:42.409 10:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:33:42.409 10:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:42.409 10:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:42.409 10:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:42.409 10:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:42.409 10:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:42.409 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.409 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:42.409 bdev_null0 00:33:42.409 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.409 10:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:42.409 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.409 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:42.409 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.409 10:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:42.409 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.409 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:42.409 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.409 10:47:15 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:42.409 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.409 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:42.410 [2024-12-12 10:47:15.041496] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:42.410 bdev_null1 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:42.410 { 00:33:42.410 "params": { 00:33:42.410 "name": "Nvme$subsystem", 00:33:42.410 "trtype": "$TEST_TRANSPORT", 00:33:42.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:42.410 "adrfam": "ipv4", 00:33:42.410 "trsvcid": "$NVMF_PORT", 00:33:42.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:42.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:42.410 "hdgst": ${hdgst:-false}, 00:33:42.410 "ddgst": ${ddgst:-false} 00:33:42.410 }, 00:33:42.410 "method": "bdev_nvme_attach_controller" 00:33:42.410 } 00:33:42.410 EOF 00:33:42.410 )") 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:42.410 { 00:33:42.410 "params": { 00:33:42.410 "name": "Nvme$subsystem", 00:33:42.410 "trtype": "$TEST_TRANSPORT", 00:33:42.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:42.410 "adrfam": "ipv4", 00:33:42.410 "trsvcid": "$NVMF_PORT", 00:33:42.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:42.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:42.410 "hdgst": ${hdgst:-false}, 00:33:42.410 "ddgst": ${ddgst:-false} 00:33:42.410 }, 00:33:42.410 "method": "bdev_nvme_attach_controller" 00:33:42.410 } 00:33:42.410 EOF 00:33:42.410 )") 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@582 -- # cat 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:42.410 "params": { 00:33:42.410 "name": "Nvme0", 00:33:42.410 "trtype": "tcp", 00:33:42.410 "traddr": "10.0.0.2", 00:33:42.410 "adrfam": "ipv4", 00:33:42.410 "trsvcid": "4420", 00:33:42.410 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:42.410 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:42.410 "hdgst": false, 00:33:42.410 "ddgst": false 00:33:42.410 }, 00:33:42.410 "method": "bdev_nvme_attach_controller" 00:33:42.410 },{ 00:33:42.410 "params": { 00:33:42.410 "name": "Nvme1", 00:33:42.410 "trtype": "tcp", 00:33:42.410 "traddr": "10.0.0.2", 00:33:42.410 "adrfam": "ipv4", 00:33:42.410 "trsvcid": "4420", 00:33:42.410 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:42.410 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:42.410 "hdgst": false, 00:33:42.410 "ddgst": false 00:33:42.410 }, 00:33:42.410 "method": "bdev_nvme_attach_controller" 00:33:42.410 }' 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:42.410 10:47:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:42.410 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:42.410 ... 00:33:42.410 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:42.410 ... 
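The invocation above shows the whole mechanism: the external spdk_bdev ioengine is LD_PRELOADed into a stock fio binary, which is then fed the attach-controller JSON on /dev/fd/62 and the generated job file on /dev/fd/61. The same run can be reproduced by hand with ordinary files; the job directives below are a hedged reconstruction pieced together from the filename0/filename1 banner lines (the real file comes from gen_fio_conf), and the Nvme0n1/Nvme1n1 bdev names are assumptions:

    # hedged reconstruction of the traced run, using files instead of fds
    cat <<'JOB' > dif_rand.fio
    [global]
    ioengine=spdk_bdev
    thread=1                 # required by the SPDK fio plugin
    runtime=5
    [filename0]
    filename=Nvme0n1         # assumed bdev name from the attached controller
    rw=randread
    bs=8k,16k,128k           # read,write,trim sizes from the banner line
    iodepth=8
    numjobs=2
    [filename1]
    filename=Nvme1n1         # assumed bdev name
    rw=randread
    bs=8k,16k,128k
    iodepth=8
    numjobs=2
    JOB
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif_rand.fio

Here bdev.json would hold the two bdev_nvme_attach_controller blocks printf'd above, wrapped in the subsystems envelope that gen_nvmf_target_json is responsible for producing.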
00:33:42.410 fio-3.35 00:33:42.410 Starting 4 threads 00:33:47.686 00:33:47.686 filename0: (groupid=0, jobs=1): err= 0: pid=1775500: Thu Dec 12 10:47:21 2024 00:33:47.686 read: IOPS=2763, BW=21.6MiB/s (22.6MB/s)(108MiB/5001msec) 00:33:47.686 slat (nsec): min=6196, max=37406, avg=9307.83, stdev=3482.62 00:33:47.686 clat (usec): min=787, max=5854, avg=2865.67, stdev=411.41 00:33:47.686 lat (usec): min=804, max=5866, avg=2874.98, stdev=411.22 00:33:47.686 clat percentiles (usec): 00:33:47.686 | 1.00th=[ 1680], 5.00th=[ 2212], 10.00th=[ 2376], 20.00th=[ 2573], 00:33:47.686 | 30.00th=[ 2704], 40.00th=[ 2835], 50.00th=[ 2966], 60.00th=[ 2966], 00:33:47.686 | 70.00th=[ 2999], 80.00th=[ 3097], 90.00th=[ 3261], 95.00th=[ 3458], 00:33:47.686 | 99.00th=[ 4015], 99.50th=[ 4293], 99.90th=[ 5014], 99.95th=[ 5145], 00:33:47.686 | 99.99th=[ 5866] 00:33:47.686 bw ( KiB/s): min=21328, max=23728, per=26.40%, avg=22204.89, stdev=785.83, samples=9 00:33:47.686 iops : min= 2666, max= 2966, avg=2775.56, stdev=98.27, samples=9 00:33:47.686 lat (usec) : 1000=0.16% 00:33:47.686 lat (msec) : 2=1.89%, 4=96.92%, 10=1.03% 00:33:47.686 cpu : usr=96.08%, sys=3.58%, ctx=5, majf=0, minf=9 00:33:47.686 IO depths : 1=0.5%, 2=6.9%, 4=66.0%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:47.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.686 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.686 issued rwts: total=13818,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:47.686 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:47.686 filename0: (groupid=0, jobs=1): err= 0: pid=1775501: Thu Dec 12 10:47:21 2024 00:33:47.686 read: IOPS=2573, BW=20.1MiB/s (21.1MB/s)(101MiB/5001msec) 00:33:47.686 slat (nsec): min=6183, max=37803, avg=9041.37, stdev=3199.54 00:33:47.686 clat (usec): min=728, max=5601, avg=3081.94, stdev=446.45 00:33:47.686 lat (usec): min=740, max=5608, avg=3090.98, stdev=446.19 00:33:47.686 clat percentiles (usec): 00:33:47.686 | 1.00th=[ 2147], 5.00th=[ 2474], 10.00th=[ 2671], 20.00th=[ 2868], 00:33:47.686 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3032], 00:33:47.686 | 70.00th=[ 3130], 80.00th=[ 3261], 90.00th=[ 3589], 95.00th=[ 3982], 00:33:47.686 | 99.00th=[ 4686], 99.50th=[ 5080], 99.90th=[ 5276], 99.95th=[ 5473], 00:33:47.686 | 99.99th=[ 5604] 00:33:47.686 bw ( KiB/s): min=19632, max=21280, per=24.49%, avg=20598.33, stdev=528.50, samples=9 00:33:47.686 iops : min= 2454, max= 2660, avg=2574.78, stdev=66.06, samples=9 00:33:47.686 lat (usec) : 750=0.01%, 1000=0.02% 00:33:47.686 lat (msec) : 2=0.56%, 4=94.55%, 10=4.86% 00:33:47.686 cpu : usr=95.84%, sys=3.86%, ctx=8, majf=0, minf=9 00:33:47.686 IO depths : 1=0.2%, 2=3.1%, 4=69.3%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:47.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.686 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.686 issued rwts: total=12871,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:47.686 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:47.686 filename1: (groupid=0, jobs=1): err= 0: pid=1775502: Thu Dec 12 10:47:21 2024 00:33:47.686 read: IOPS=2694, BW=21.1MiB/s (22.1MB/s)(106MiB/5042msec) 00:33:47.686 slat (nsec): min=6178, max=57265, avg=9333.74, stdev=3301.24 00:33:47.686 clat (usec): min=771, max=43006, avg=2934.52, stdev=944.45 00:33:47.686 lat (usec): min=781, max=43013, avg=2943.85, stdev=944.43 00:33:47.686 clat percentiles (usec): 00:33:47.686 | 1.00th=[ 1926], 5.00th=[ 
2245], 10.00th=[ 2409], 20.00th=[ 2606], 00:33:47.686 | 30.00th=[ 2737], 40.00th=[ 2900], 50.00th=[ 2966], 60.00th=[ 2966], 00:33:47.686 | 70.00th=[ 2999], 80.00th=[ 3097], 90.00th=[ 3392], 95.00th=[ 3720], 00:33:47.686 | 99.00th=[ 4359], 99.50th=[ 4752], 99.90th=[ 5211], 99.95th=[ 5407], 00:33:47.686 | 99.99th=[43254] 00:33:47.686 bw ( KiB/s): min=20736, max=22512, per=25.79%, avg=21692.44, stdev=497.21, samples=9 00:33:47.686 iops : min= 2592, max= 2814, avg=2711.56, stdev=62.15, samples=9 00:33:47.686 lat (usec) : 1000=0.01% 00:33:47.686 lat (msec) : 2=1.32%, 4=96.15%, 10=2.48%, 50=0.04% 00:33:47.686 cpu : usr=95.70%, sys=3.97%, ctx=18, majf=0, minf=9 00:33:47.686 IO depths : 1=0.5%, 2=5.9%, 4=64.7%, 8=29.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:47.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.686 complete : 0=0.0%, 4=93.7%, 8=6.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.686 issued rwts: total=13587,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:47.686 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:47.686 filename1: (groupid=0, jobs=1): err= 0: pid=1775503: Thu Dec 12 10:47:21 2024 00:33:47.686 read: IOPS=2545, BW=19.9MiB/s (20.9MB/s)(99.5MiB/5001msec) 00:33:47.686 slat (nsec): min=6174, max=49711, avg=8898.03, stdev=3333.04 00:33:47.686 clat (usec): min=986, max=5851, avg=3115.45, stdev=449.00 00:33:47.686 lat (usec): min=996, max=5865, avg=3124.34, stdev=448.80 00:33:47.686 clat percentiles (usec): 00:33:47.686 | 1.00th=[ 2212], 5.00th=[ 2507], 10.00th=[ 2737], 20.00th=[ 2900], 00:33:47.686 | 30.00th=[ 2966], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3064], 00:33:47.686 | 70.00th=[ 3163], 80.00th=[ 3294], 90.00th=[ 3654], 95.00th=[ 4015], 00:33:47.686 | 99.00th=[ 4883], 99.50th=[ 5145], 99.90th=[ 5473], 99.95th=[ 5604], 00:33:47.686 | 99.99th=[ 5866] 00:33:47.687 bw ( KiB/s): min=19696, max=21008, per=24.19%, avg=20341.33, stdev=474.30, samples=9 00:33:47.687 iops : min= 2462, max= 2626, avg=2542.67, stdev=59.29, samples=9 00:33:47.687 lat (usec) : 1000=0.01% 00:33:47.687 lat (msec) : 2=0.29%, 4=94.61%, 10=5.09% 00:33:47.687 cpu : usr=95.92%, sys=3.76%, ctx=7, majf=0, minf=9 00:33:47.687 IO depths : 1=0.3%, 2=2.7%, 4=70.5%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:47.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.687 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.687 issued rwts: total=12731,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:47.687 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:47.687 00:33:47.687 Run status group 0 (all jobs): 00:33:47.687 READ: bw=82.1MiB/s (86.1MB/s), 19.9MiB/s-21.6MiB/s (20.9MB/s-22.6MB/s), io=414MiB (434MB), run=5001-5042msec 00:33:47.687 10:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:33:47.687 10:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:47.687 10:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:47.687 10:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:47.687 10:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:47.687 10:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:47.687 10:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.687 10:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
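The destroy_subsystems teardown that begins here is the mirror image of the setup: for each index it deletes the NVMe-oF subsystem first, then the null bdev backing it. Outside the test harness the same two rpc.py calls per subsystem would do it; method names and arguments below are verbatim from the trace, and the RPC socket is assumed to be the default /var/tmp/spdk.sock:

    # hedged sketch of the direct calls behind destroy_subsystem 0 and 1
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    ./scripts/rpc.py bdev_null_delete bdev_null0
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py bdev_null_delete bdev_null1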
00:33:47.687 10:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.687 10:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:47.687 10:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.687 10:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:47.687 10:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.687 10:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:47.687 10:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:47.687 10:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:47.687 10:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:47.687 10:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.687 10:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:47.687 10:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.687 10:47:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:47.687 10:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.687 10:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:47.687 10:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.687 00:33:47.687 real 0m24.395s 00:33:47.687 user 4m51.943s 00:33:47.687 sys 0m5.360s 00:33:47.687 10:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:47.687 10:47:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:47.687 ************************************ 00:33:47.687 END TEST fio_dif_rand_params 00:33:47.687 ************************************ 00:33:47.687 10:47:21 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:33:47.687 10:47:21 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:47.687 10:47:21 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:47.687 10:47:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:47.687 ************************************ 00:33:47.687 START TEST fio_dif_digest 00:33:47.687 ************************************ 00:33:47.687 10:47:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:33:47.687 10:47:21 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:33:47.687 10:47:21 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:33:47.687 10:47:21 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:33:47.687 10:47:21 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:33:47.687 10:47:21 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:33:47.687 10:47:21 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:33:47.687 10:47:21 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:33:47.687 10:47:21 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:33:47.687 10:47:21 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:33:47.687 10:47:21 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:33:47.687 10:47:21 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:33:47.687 10:47:21 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:33:47.687 10:47:21 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:33:47.687 10:47:21 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:33:47.687 10:47:21 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:33:47.687 10:47:21 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:47.687 10:47:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.687 10:47:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:47.687 bdev_null0 00:33:47.687 10:47:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.687 10:47:21 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:47.687 10:47:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.687 10:47:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:47.687 10:47:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.687 10:47:21 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:47.687 10:47:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.687 10:47:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:47.687 10:47:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.687 10:47:21 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:47.687 10:47:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.687 10:47:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:47.687 [2024-12-12 10:47:21.603754] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:47.687 10:47:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.687 10:47:21 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:33:47.687 10:47:21 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:33:47.687 10:47:21 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:47.687 10:47:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:33:47.687 10:47:21 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:47.688 10:47:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:33:47.688 10:47:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:47.688 10:47:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:47.688 10:47:21 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:33:47.688 10:47:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:47.688 { 00:33:47.688 "params": { 00:33:47.688 "name": "Nvme$subsystem", 00:33:47.688 "trtype": "$TEST_TRANSPORT", 
00:33:47.688 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:47.688 "adrfam": "ipv4", 00:33:47.688 "trsvcid": "$NVMF_PORT", 00:33:47.688 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:47.688 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:47.688 "hdgst": ${hdgst:-false}, 00:33:47.688 "ddgst": ${ddgst:-false} 00:33:47.688 }, 00:33:47.688 "method": "bdev_nvme_attach_controller" 00:33:47.688 } 00:33:47.688 EOF 00:33:47.688 )") 00:33:47.688 10:47:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:47.688 10:47:21 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:33:47.688 10:47:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:47.688 10:47:21 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:33:47.688 10:47:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:47.688 10:47:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:47.688 10:47:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:33:47.688 10:47:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:47.688 10:47:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:47.688 10:47:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:33:47.688 10:47:21 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:33:47.688 10:47:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:47.688 10:47:21 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:33:47.688 10:47:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:33:47.688 10:47:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:47.688 10:47:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
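The digest setup traced above differs from the earlier rand_params runs in two ways: the null bdev is created with --dif-type 3 (64 MiB, 512-byte blocks, 16 bytes of per-block metadata), and the generated attach-controller params turn hdgst and ddgst on. The direct rpc.py equivalents of the create_subsystem 0 helper are sketched below; the arguments are verbatim from the trace, with only the default RPC socket assumed:

    ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420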
00:33:47.688 10:47:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:33:47.688 10:47:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:47.688 "params": { 00:33:47.688 "name": "Nvme0", 00:33:47.688 "trtype": "tcp", 00:33:47.688 "traddr": "10.0.0.2", 00:33:47.688 "adrfam": "ipv4", 00:33:47.688 "trsvcid": "4420", 00:33:47.688 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:47.688 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:47.688 "hdgst": true, 00:33:47.688 "ddgst": true 00:33:47.688 }, 00:33:47.688 "method": "bdev_nvme_attach_controller" 00:33:47.688 }' 00:33:47.688 10:47:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:47.688 10:47:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:47.688 10:47:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:47.688 10:47:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:47.688 10:47:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:47.688 10:47:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:47.688 10:47:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:47.688 10:47:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:47.688 10:47:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:47.688 10:47:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:48.255 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:48.255 ... 
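The resolved config printf'd above is what makes this a digest test: with "hdgst": true and "ddgst": true the initiator negotiates NVMe/TCP header and data digests, i.e. a CRC32C check over each PDU header and payload. Written out as a standalone file for --spdk_json_conf, the printed fragment sits inside the bdev subsystem envelope; the envelope shape below is an assumption about what gen_nvmf_target_json wraps around the printed params:

    # hedged: bdev.json for the digest run, built from the printf output above
    cat <<'JSON' > bdev.json
    {
      "subsystems": [ {
        "subsystem": "bdev",
        "config": [ {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": true, "ddgst": true
          }
        } ]
      } ]
    }
    JSON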
00:33:48.255 fio-3.35 00:33:48.255 Starting 3 threads 00:34:00.466 00:34:00.466 filename0: (groupid=0, jobs=1): err= 0: pid=1776547: Thu Dec 12 10:47:32 2024 00:34:00.466 read: IOPS=290, BW=36.3MiB/s (38.1MB/s)(365MiB/10048msec) 00:34:00.466 slat (nsec): min=6417, max=30939, avg=11450.21, stdev=2023.92 00:34:00.466 clat (usec): min=6168, max=52096, avg=10304.90, stdev=1302.44 00:34:00.466 lat (usec): min=6180, max=52108, avg=10316.35, stdev=1302.42 00:34:00.466 clat percentiles (usec): 00:34:00.466 | 1.00th=[ 8291], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9765], 00:34:00.466 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10290], 60.00th=[10552], 00:34:00.466 | 70.00th=[10683], 80.00th=[10814], 90.00th=[11207], 95.00th=[11338], 00:34:00.466 | 99.00th=[11863], 99.50th=[11994], 99.90th=[12649], 99.95th=[50594], 00:34:00.466 | 99.99th=[52167] 00:34:00.466 bw ( KiB/s): min=36352, max=38656, per=35.46%, avg=37312.00, stdev=476.22, samples=20 00:34:00.466 iops : min= 284, max= 302, avg=291.50, stdev= 3.72, samples=20 00:34:00.466 lat (msec) : 10=32.94%, 20=66.99%, 100=0.07% 00:34:00.466 cpu : usr=94.00%, sys=5.61%, ctx=17, majf=0, minf=75 00:34:00.466 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:00.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.466 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.466 issued rwts: total=2917,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:00.466 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:00.466 filename0: (groupid=0, jobs=1): err= 0: pid=1776548: Thu Dec 12 10:47:32 2024 00:34:00.466 read: IOPS=264, BW=33.1MiB/s (34.7MB/s)(333MiB/10044msec) 00:34:00.466 slat (nsec): min=6420, max=24350, avg=11715.49, stdev=1634.47 00:34:00.466 clat (usec): min=6808, max=48025, avg=11298.83, stdev=1260.05 00:34:00.466 lat (usec): min=6821, max=48033, avg=11310.55, stdev=1260.03 00:34:00.466 clat percentiles (usec): 00:34:00.466 | 1.00th=[ 9372], 5.00th=[10028], 10.00th=[10290], 20.00th=[10683], 00:34:00.466 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11207], 60.00th=[11469], 00:34:00.466 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12256], 95.00th=[12649], 00:34:00.466 | 99.00th=[13173], 99.50th=[13566], 99.90th=[14222], 99.95th=[44827], 00:34:00.466 | 99.99th=[47973] 00:34:00.466 bw ( KiB/s): min=32768, max=34560, per=32.33%, avg=34022.40, stdev=454.17, samples=20 00:34:00.466 iops : min= 256, max= 270, avg=265.80, stdev= 3.55, samples=20 00:34:00.466 lat (msec) : 10=4.62%, 20=95.30%, 50=0.08% 00:34:00.466 cpu : usr=94.77%, sys=4.92%, ctx=16, majf=0, minf=61 00:34:00.466 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:00.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.466 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.466 issued rwts: total=2660,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:00.466 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:00.466 filename0: (groupid=0, jobs=1): err= 0: pid=1776549: Thu Dec 12 10:47:32 2024 00:34:00.466 read: IOPS=267, BW=33.4MiB/s (35.0MB/s)(335MiB/10044msec) 00:34:00.466 slat (nsec): min=6395, max=28733, avg=11202.64, stdev=2125.09 00:34:00.466 clat (usec): min=8710, max=52923, avg=11201.92, stdev=1877.19 00:34:00.466 lat (usec): min=8723, max=52946, avg=11213.12, stdev=1877.31 00:34:00.466 clat percentiles (usec): 00:34:00.466 | 1.00th=[ 9372], 5.00th=[ 9896], 10.00th=[10290], 20.00th=[10552], 
00:34:00.466 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11076], 60.00th=[11338], 00:34:00.466 | 70.00th=[11469], 80.00th=[11731], 90.00th=[11994], 95.00th=[12387], 00:34:00.467 | 99.00th=[13042], 99.50th=[13173], 99.90th=[52691], 99.95th=[52691], 00:34:00.467 | 99.99th=[52691] 00:34:00.467 bw ( KiB/s): min=31232, max=35328, per=32.61%, avg=34316.80, stdev=836.68, samples=20 00:34:00.467 iops : min= 244, max= 276, avg=268.10, stdev= 6.54, samples=20 00:34:00.467 lat (msec) : 10=5.85%, 20=93.96%, 50=0.04%, 100=0.15% 00:34:00.467 cpu : usr=94.79%, sys=4.88%, ctx=64, majf=0, minf=110 00:34:00.467 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:00.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.467 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.467 issued rwts: total=2683,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:00.467 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:00.467 00:34:00.467 Run status group 0 (all jobs): 00:34:00.467 READ: bw=103MiB/s (108MB/s), 33.1MiB/s-36.3MiB/s (34.7MB/s-38.1MB/s), io=1033MiB (1083MB), run=10044-10048msec 00:34:00.467 10:47:32 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:00.467 10:47:32 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:34:00.467 10:47:32 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:34:00.467 10:47:32 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:00.467 10:47:32 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:34:00.467 10:47:32 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:00.467 10:47:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.467 10:47:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:00.467 10:47:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.467 10:47:32 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:00.467 10:47:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.467 10:47:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:00.467 10:47:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.467 00:34:00.467 real 0m11.090s 00:34:00.467 user 0m34.968s 00:34:00.467 sys 0m1.828s 00:34:00.467 10:47:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:00.467 10:47:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:00.467 ************************************ 00:34:00.467 END TEST fio_dif_digest 00:34:00.467 ************************************ 00:34:00.467 10:47:32 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:00.467 10:47:32 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:34:00.467 10:47:32 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:00.467 10:47:32 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:34:00.467 10:47:32 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:00.467 10:47:32 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:34:00.467 10:47:32 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:00.467 10:47:32 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:00.467 rmmod nvme_tcp 00:34:00.467 rmmod nvme_fabrics 00:34:00.467 rmmod nvme_keyring 00:34:00.467 10:47:32 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:00.467 10:47:32 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:34:00.467 10:47:32 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:34:00.467 10:47:32 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 1768173 ']' 00:34:00.467 10:47:32 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 1768173 00:34:00.467 10:47:32 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 1768173 ']' 00:34:00.467 10:47:32 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 1768173 00:34:00.467 10:47:32 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:34:00.467 10:47:32 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:00.467 10:47:32 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1768173 00:34:00.467 10:47:32 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:00.467 10:47:32 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:00.467 10:47:32 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1768173' 00:34:00.467 killing process with pid 1768173 00:34:00.467 10:47:32 nvmf_dif -- common/autotest_common.sh@973 -- # kill 1768173 00:34:00.467 10:47:32 nvmf_dif -- common/autotest_common.sh@978 -- # wait 1768173 00:34:00.467 10:47:32 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:00.467 10:47:32 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:01.846 Waiting for block devices as requested 00:34:01.846 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:01.846 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:01.846 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:02.105 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:02.105 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:02.105 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:02.364 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:02.364 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:02.364 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:02.623 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:02.623 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:02.623 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:02.623 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:02.882 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:02.882 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:02.882 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:02.882 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:03.142 10:47:36 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:03.142 10:47:36 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:03.142 10:47:36 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:34:03.142 10:47:36 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:34:03.142 10:47:36 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:03.142 10:47:36 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:34:03.142 10:47:36 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:03.142 10:47:36 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:03.142 10:47:36 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:03.142 10:47:36 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:03.142 10:47:36 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:05.048 10:47:39 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:05.048 
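After nvmftestfini unloads the nvme-tcp stack and setup.sh reset returns the PCI functions to their kernel drivers (vfio-pci -> nvme for the SSD, vfio-pci -> ioatdma for the DMA channels, as listed above), the rebinding can be spot-checked through sysfs; the addresses below are copied from the trace:

    # hedged spot-check that the rebinds listed above took effect
    basename "$(readlink /sys/bus/pci/devices/0000:5e:00.0/driver)"   # expect: nvme
    basename "$(readlink /sys/bus/pci/devices/0000:00:04.0/driver)"   # expect: ioatdma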
00:34:05.048 real 1m14.137s 00:34:05.048 user 7m9.750s 00:34:05.048 sys 0m20.858s 00:34:05.048 10:47:39 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:05.048 10:47:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:05.048 ************************************ 00:34:05.048 END TEST nvmf_dif 00:34:05.048 ************************************ 00:34:05.048 10:47:39 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:05.048 10:47:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:05.048 10:47:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:05.048 10:47:39 -- common/autotest_common.sh@10 -- # set +x 00:34:05.307 ************************************ 00:34:05.307 START TEST nvmf_abort_qd_sizes 00:34:05.307 ************************************ 00:34:05.307 10:47:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:05.307 * Looking for test storage... 00:34:05.307 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:05.307 10:47:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:05.307 10:47:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:34:05.307 10:47:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:05.307 10:47:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:05.307 10:47:39 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:05.307 10:47:39 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:05.307 10:47:39 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:05.307 10:47:39 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:34:05.307 10:47:39 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:34:05.307 10:47:39 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:34:05.307 10:47:39 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:34:05.307 10:47:39 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:34:05.307 10:47:39 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:34:05.307 10:47:39 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:34:05.307 10:47:39 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:05.307 10:47:39 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:34:05.307 10:47:39 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:34:05.307 10:47:39 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:05.307 10:47:39 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:05.307 10:47:39 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:34:05.307 10:47:39 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:34:05.307 10:47:39 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:05.307 10:47:39 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:34:05.307 10:47:39 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:34:05.307 10:47:39 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:34:05.307 10:47:39 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:34:05.307 10:47:39 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:05.307 10:47:39 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:34:05.307 10:47:39 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:34:05.307 10:47:39 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:05.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.308 --rc genhtml_branch_coverage=1 00:34:05.308 --rc genhtml_function_coverage=1 00:34:05.308 --rc genhtml_legend=1 00:34:05.308 --rc geninfo_all_blocks=1 00:34:05.308 --rc geninfo_unexecuted_blocks=1 00:34:05.308 00:34:05.308 ' 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:05.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.308 --rc genhtml_branch_coverage=1 00:34:05.308 --rc genhtml_function_coverage=1 00:34:05.308 --rc genhtml_legend=1 00:34:05.308 --rc geninfo_all_blocks=1 00:34:05.308 --rc geninfo_unexecuted_blocks=1 00:34:05.308 00:34:05.308 ' 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:05.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.308 --rc genhtml_branch_coverage=1 00:34:05.308 --rc genhtml_function_coverage=1 00:34:05.308 --rc genhtml_legend=1 00:34:05.308 --rc geninfo_all_blocks=1 00:34:05.308 --rc geninfo_unexecuted_blocks=1 00:34:05.308 00:34:05.308 ' 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:05.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.308 --rc genhtml_branch_coverage=1 00:34:05.308 --rc genhtml_function_coverage=1 00:34:05.308 --rc genhtml_legend=1 00:34:05.308 --rc geninfo_all_blocks=1 00:34:05.308 --rc geninfo_unexecuted_blocks=1 00:34:05.308 00:34:05.308 ' 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:05.308 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:34:05.308 10:47:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:11.879 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:11.879 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:11.879 Found net devices under 0000:af:00.0: cvl_0_0 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:11.879 Found net devices under 0000:af:00.1: cvl_0_1 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:11.879 10:47:44 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:11.879 10:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:11.879 10:47:45 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:11.879 10:47:45 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:11.879 10:47:45 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:11.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:11.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms 00:34:11.879 00:34:11.879 --- 10.0.0.2 ping statistics --- 00:34:11.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:11.879 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:34:11.879 10:47:45 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:11.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:11.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:34:11.879 00:34:11.879 --- 10.0.0.1 ping statistics --- 00:34:11.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:11.880 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:34:11.880 10:47:45 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:11.880 10:47:45 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:34:11.880 10:47:45 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:34:11.880 10:47:45 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:13.786 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:13.786 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:13.786 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:13.786 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:13.786 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:13.786 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:14.045 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:14.045 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:14.045 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:14.045 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:14.045 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:14.045 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:14.045 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:14.045 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:14.045 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:14.045 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:14.982 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:14.982 10:47:48 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:14.982 10:47:48 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:14.982 10:47:48 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:14.982 10:47:48 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:14.982 10:47:48 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:14.982 10:47:48 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:14.982 10:47:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:34:14.982 10:47:48 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:14.982 10:47:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:14.982 10:47:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:14.982 10:47:48 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=1784411 00:34:14.982 10:47:48 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:34:14.982 10:47:48 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 1784411 00:34:14.982 10:47:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 1784411 ']' 00:34:14.982 10:47:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:14.982 10:47:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:14.982 10:47:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
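The nvmftestinit trace above carries the whole physical-NIC topology for this job: one port of the dual-port E810 (cvl_0_0) is moved into a private network namespace to play the target, the sibling port (cvl_0_1) stays in the root namespace as the initiator, a firewall exception is punched for the NVMe/TCP port, and connectivity is proven with one ping in each direction before the harness launches nvmf_tgt inside the namespace via ip netns exec. A minimal standalone sketch of the same plumbing, using only commands that appear in this trace (the interface names and 10.0.0.x addresses are simply what this host used and would differ elsewhere):

  TARGET_IF=cvl_0_0        # becomes 10.0.0.2 inside the namespace
  INIT_IF=cvl_0_1          # stays in the root namespace as 10.0.0.1
  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INIT_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
  ip link set "$INIT_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up
  # let NVMe/TCP traffic (port 4420) in on the initiator side
  iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1

This is a sketch only: the real helper also flushes stale addresses first and tags its iptables rule with an SPDK_NVMF comment so that teardown can strip it selectively.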
00:34:14.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:14.982 10:47:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:14.982 10:47:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:14.982 [2024-12-12 10:47:48.969857] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:34:14.982 [2024-12-12 10:47:48.969900] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:15.240 [2024-12-12 10:47:49.049274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:15.240 [2024-12-12 10:47:49.091643] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:15.240 [2024-12-12 10:47:49.091680] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:15.240 [2024-12-12 10:47:49.091687] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:15.240 [2024-12-12 10:47:49.091693] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:15.240 [2024-12-12 10:47:49.091698] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:15.240 [2024-12-12 10:47:49.093161] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:34:15.241 [2024-12-12 10:47:49.093274] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:34:15.241 [2024-12-12 10:47:49.093378] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:34:15.241 [2024-12-12 10:47:49.093379] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:34:15.241 10:47:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:15.241 10:47:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:34:15.241 10:47:49 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:15.241 10:47:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:15.241 10:47:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:15.241 10:47:49 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:15.241 10:47:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:34:15.241 10:47:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:34:15.241 10:47:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:34:15.241 10:47:49 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:34:15.241 10:47:49 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:34:15.241 10:47:49 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:34:15.241 10:47:49 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:34:15.241 10:47:49 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:34:15.241 10:47:49 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:34:15.241 10:47:49 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:34:15.241 
10:47:49 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:34:15.241 10:47:49 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:34:15.241 10:47:49 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:34:15.241 10:47:49 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:34:15.241 10:47:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:34:15.241 10:47:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:34:15.241 10:47:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:34:15.241 10:47:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:15.241 10:47:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:15.241 10:47:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:15.499 ************************************ 00:34:15.499 START TEST spdk_target_abort 00:34:15.499 ************************************ 00:34:15.499 10:47:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:34:15.499 10:47:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:34:15.499 10:47:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:34:15.499 10:47:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.499 10:47:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:18.783 spdk_targetn1 00:34:18.783 10:47:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.783 10:47:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:18.783 10:47:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.783 10:47:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:18.783 [2024-12-12 10:47:52.101463] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:18.783 10:47:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.783 10:47:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:34:18.783 10:47:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.783 10:47:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:18.783 10:47:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.783 10:47:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:34:18.783 10:47:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.783 10:47:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:18.783 10:47:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.783 10:47:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:34:18.783 10:47:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.783 10:47:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:18.783 [2024-12-12 10:47:52.145758] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:18.783 10:47:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.783 10:47:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:34:18.783 10:47:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:18.783 10:47:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:18.783 10:47:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:34:18.783 10:47:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:18.783 10:47:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:18.783 10:47:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:18.783 10:47:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:18.783 10:47:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:18.783 10:47:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:18.783 10:47:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:18.783 10:47:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:18.783 10:47:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:18.783 10:47:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:18.783 10:47:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:34:18.783 10:47:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:18.783 10:47:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:18.783 10:47:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:18.784 10:47:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:18.784 10:47:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:18.784 10:47:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:21.307 Initializing NVMe Controllers 00:34:21.307 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:21.307 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:21.307 Initialization complete. Launching workers. 00:34:21.307 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 17332, failed: 0 00:34:21.307 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1424, failed to submit 15908 00:34:21.307 success 703, unsuccessful 721, failed 0 00:34:21.307 10:47:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:21.307 10:47:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:24.586 Initializing NVMe Controllers 00:34:24.586 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:24.586 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:24.586 Initialization complete. Launching workers. 00:34:24.586 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8564, failed: 0 00:34:24.586 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1227, failed to submit 7337 00:34:24.586 success 329, unsuccessful 898, failed 0 00:34:24.586 10:47:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:24.586 10:47:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:27.866 Initializing NVMe Controllers 00:34:27.866 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:27.866 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:27.866 Initialization complete. Launching workers. 
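The spdk_target_abort runs around this point reduce to a short recipe: claim the local NVMe device (0000:5e:00.0) as an SPDK bdev named spdk_target, export its namespace spdk_targetn1 over NVMe/TCP inside the test namespace, then drive it with the bundled abort example at queue depths 4, 24 and 64. In each summary, "I/O completed" counts the data I/O, and the submitted aborts split into success plus unsuccessful plus failed (703 + 721 + 0 = 1424 in the first run), i.e. roughly how many aborts the target honored versus how many arrived after the I/O had already completed. The harness drives the configuration through its rpc_cmd wrapper; a direct equivalent with scripts/rpc.py against /var/tmp/spdk.sock would look roughly like the sketch below (the direct invocation is an assumption, only the wrapped calls appear in the log):

  RPC="scripts/rpc.py"               # run from the SPDK repo root; default socket /var/tmp/spdk.sock
  NQN=nqn.2016-06.io.spdk:testnqn
  $RPC bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns "$NQN" spdk_targetn1
  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  # one of the three tested queue depths; 4096-byte mixed read/write I/O
  ./build/examples/abort -q 4 -w rw -M 50 -o 4096 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'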
00:34:27.866 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38667, failed: 0 00:34:27.866 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2992, failed to submit 35675 00:34:27.866 success 598, unsuccessful 2394, failed 0 00:34:27.866 10:48:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:34:27.866 10:48:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.866 10:48:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:27.866 10:48:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.866 10:48:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:34:27.866 10:48:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.866 10:48:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:29.238 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.238 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1784411 00:34:29.238 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 1784411 ']' 00:34:29.238 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 1784411 00:34:29.238 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:34:29.238 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:29.238 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1784411 00:34:29.238 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:29.238 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:29.238 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1784411' 00:34:29.238 killing process with pid 1784411 00:34:29.238 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 1784411 00:34:29.238 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 1784411 00:34:29.497 00:34:29.497 real 0m14.005s 00:34:29.497 user 0m53.283s 00:34:29.497 sys 0m2.665s 00:34:29.497 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:29.497 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:29.497 ************************************ 00:34:29.497 END TEST spdk_target_abort 00:34:29.497 ************************************ 00:34:29.497 10:48:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:34:29.497 10:48:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:29.497 10:48:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:29.497 10:48:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:29.497 ************************************ 00:34:29.497 START TEST kernel_target_abort 00:34:29.497 
************************************ 00:34:29.497 10:48:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:34:29.498 10:48:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:34:29.498 10:48:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:34:29.498 10:48:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:29.498 10:48:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:29.498 10:48:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.498 10:48:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.498 10:48:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:29.498 10:48:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.498 10:48:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:29.498 10:48:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:29.498 10:48:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:29.498 10:48:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:29.498 10:48:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:29.498 10:48:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:29.498 10:48:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:29.498 10:48:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:29.498 10:48:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:29.498 10:48:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:34:29.498 10:48:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:34:29.498 10:48:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:29.498 10:48:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:29.498 10:48:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:32.031 Waiting for block devices as requested 00:34:32.031 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:32.290 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:32.290 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:32.549 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:32.549 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:32.549 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:32.549 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:32.808 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:32.808 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:32.808 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:33.067 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:33.067 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:33.067 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:33.325 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:33.325 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:33.325 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:33.325 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:33.584 10:48:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:33.584 10:48:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:33.585 10:48:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:33.585 10:48:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:33.585 10:48:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:33.585 10:48:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:33.585 10:48:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:33.585 10:48:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:33.585 10:48:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:33.585 No valid GPT data, bailing 00:34:33.585 10:48:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:33.585 10:48:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:34:33.585 10:48:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:34:33.585 10:48:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:33.585 10:48:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:33.585 10:48:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:33.585 10:48:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:33.585 10:48:07 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:33.585 10:48:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:33.585 10:48:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:34:33.585 10:48:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:33.585 10:48:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:34:33.585 10:48:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:33.585 10:48:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:34:33.585 10:48:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:34:33.585 10:48:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:34:33.585 10:48:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:33.585 10:48:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:33.585 00:34:33.585 Discovery Log Number of Records 2, Generation counter 2 00:34:33.585 =====Discovery Log Entry 0====== 00:34:33.585 trtype: tcp 00:34:33.585 adrfam: ipv4 00:34:33.585 subtype: current discovery subsystem 00:34:33.585 treq: not specified, sq flow control disable supported 00:34:33.585 portid: 1 00:34:33.585 trsvcid: 4420 00:34:33.585 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:33.585 traddr: 10.0.0.1 00:34:33.585 eflags: none 00:34:33.585 sectype: none 00:34:33.585 =====Discovery Log Entry 1====== 00:34:33.585 trtype: tcp 00:34:33.585 adrfam: ipv4 00:34:33.585 subtype: nvme subsystem 00:34:33.585 treq: not specified, sq flow control disable supported 00:34:33.585 portid: 1 00:34:33.585 trsvcid: 4420 00:34:33.585 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:33.585 traddr: 10.0.0.1 00:34:33.585 eflags: none 00:34:33.585 sectype: none 00:34:33.585 10:48:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:34:33.585 10:48:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:33.585 10:48:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:33.585 10:48:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:34:33.585 10:48:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:33.585 10:48:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:33.585 10:48:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:33.585 10:48:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:33.585 10:48:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:33.585 10:48:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:33.585 10:48:07 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:33.585 10:48:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:33.585 10:48:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:33.585 10:48:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:33.585 10:48:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:34:33.585 10:48:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:33.585 10:48:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:34:33.585 10:48:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:33.585 10:48:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:33.585 10:48:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:33.585 10:48:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:36.865 Initializing NVMe Controllers 00:34:36.865 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:36.865 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:36.865 Initialization complete. Launching workers. 00:34:36.865 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 93373, failed: 0 00:34:36.865 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 93373, failed to submit 0 00:34:36.865 success 0, unsuccessful 93373, failed 0 00:34:36.865 10:48:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:36.865 10:48:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:40.146 Initializing NVMe Controllers 00:34:40.146 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:40.146 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:40.146 Initialization complete. Launching workers. 
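For kernel_target_abort the roles flip: the in-kernel nvmet driver serves the local NVMe namespace and SPDK acts only as the initiator-side load generator. The setup traced above is pure configfs, driven by mkdir, echo and ln -s. The xtrace shows each echo payload but not its redirect target, so the attribute paths in this sketch follow the standard nvmet configfs layout rather than anything visible in the log:

  NQN=nqn.2016-06.io.spdk:testnqn
  SUB=/sys/kernel/config/nvmet/subsystems/$NQN
  PORT=/sys/kernel/config/nvmet/ports/1
  modprobe nvmet
  mkdir "$SUB" "$SUB/namespaces/1" "$PORT"
  # NOTE: attribute names below are the stock nvmet configfs layout (assumed);
  # the log only shows the echoed values, not the files they were written to.
  echo "SPDK-$NQN"   > "$SUB/attr_serial"
  echo 1             > "$SUB/attr_allow_any_host"
  echo /dev/nvme0n1  > "$SUB/namespaces/1/device_path"
  echo 1             > "$SUB/namespaces/1/enable"
  echo 10.0.0.1      > "$PORT/addr_traddr"
  echo tcp           > "$PORT/addr_trtype"
  echo 4420          > "$PORT/addr_trsvcid"
  echo ipv4          > "$PORT/addr_adrfam"
  ln -s "$SUB" "$PORT/subsystems/"   # publish the subsystem on the port

The harness then sanity-checks the target with the nvme discover call shown earlier, which returns the expected two records: the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn, both on 10.0.0.1:4420.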
00:34:40.146 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 150570, failed: 0 00:34:40.146 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37782, failed to submit 112788 00:34:40.146 success 0, unsuccessful 37782, failed 0 00:34:40.146 10:48:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:40.146 10:48:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:43.426 Initializing NVMe Controllers 00:34:43.426 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:43.426 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:43.426 Initialization complete. Launching workers. 00:34:43.426 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 141346, failed: 0 00:34:43.426 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35402, failed to submit 105944 00:34:43.426 success 0, unsuccessful 35402, failed 0 00:34:43.426 10:48:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:34:43.426 10:48:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:43.426 10:48:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:34:43.426 10:48:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:43.426 10:48:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:43.426 10:48:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:43.426 10:48:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:43.426 10:48:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:43.426 10:48:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:43.426 10:48:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:45.961 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:45.961 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:45.961 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:45.961 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:45.961 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:45.961 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:45.961 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:45.961 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:45.961 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:45.961 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:45.961 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:45.961 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:45.961 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:45.961 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:45.961 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:34:45.961 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:46.897 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:46.897 00:34:46.897 real 0m17.424s 00:34:46.897 user 0m9.152s 00:34:46.897 sys 0m4.931s 00:34:46.897 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:46.897 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:46.897 ************************************ 00:34:46.897 END TEST kernel_target_abort 00:34:46.897 ************************************ 00:34:46.897 10:48:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:34:46.897 10:48:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:34:46.897 10:48:20 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:46.897 10:48:20 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:34:46.897 10:48:20 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:46.897 10:48:20 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:34:46.897 10:48:20 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:46.897 10:48:20 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:46.897 rmmod nvme_tcp 00:34:46.897 rmmod nvme_fabrics 00:34:46.897 rmmod nvme_keyring 00:34:46.897 10:48:20 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:46.897 10:48:20 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:34:46.897 10:48:20 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:34:46.897 10:48:20 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 1784411 ']' 00:34:46.897 10:48:20 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 1784411 00:34:46.897 10:48:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 1784411 ']' 00:34:46.897 10:48:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 1784411 00:34:46.897 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1784411) - No such process 00:34:46.897 10:48:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 1784411 is not found' 00:34:46.897 Process with pid 1784411 is not found 00:34:46.897 10:48:20 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:46.897 10:48:20 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:49.527 Waiting for block devices as requested 00:34:49.785 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:49.785 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:49.785 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:50.044 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:50.044 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:50.044 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:50.303 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:50.303 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:50.303 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:50.303 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:50.562 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:50.562 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:50.562 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:50.821 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:50.821 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:50.821 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:51.080 
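What remains of the run is symmetric teardown: clean_kernel_target dismantles the configfs tree in reverse order and unloads nvmet, while nvmftestfini (whose rmmod chatter appears just above) removes the host-side modules, strips only the iptables rules tagged SPDK_NVMF, and retires the test namespace. Condensed from the commands in this log, with the lines the xtrace does not show verbatim marked as assumptions:

  NQN=nqn.2016-06.io.spdk:testnqn
  echo 0 > /sys/kernel/config/nvmet/subsystems/$NQN/namespaces/1/enable  # redirect target assumed
  rm -f  /sys/kernel/config/nvmet/ports/1/subsystems/$NQN
  rmdir  /sys/kernel/config/nvmet/subsystems/$NQN/namespaces/1
  rmdir  /sys/kernel/config/nvmet/ports/1
  rmdir  /sys/kernel/config/nvmet/subsystems/$NQN
  modprobe -r nvmet_tcp nvmet
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep every rule except the test's own
  ip -4 addr flush cvl_0_1
  ip netns delete cvl_0_0_ns_spdk   # assumed: _remove_spdk_ns runs with its xtrace suppressed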
0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:51.080 10:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:51.080 10:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:51.080 10:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:34:51.080 10:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:34:51.080 10:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:51.080 10:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:34:51.080 10:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:51.080 10:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:51.080 10:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:51.080 10:48:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:51.080 10:48:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:52.995 10:48:27 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:53.255 00:34:53.255 real 0m47.925s 00:34:53.255 user 1m6.649s 00:34:53.255 sys 0m16.285s 00:34:53.255 10:48:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:53.255 10:48:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:53.255 ************************************ 00:34:53.255 END TEST nvmf_abort_qd_sizes 00:34:53.255 ************************************ 00:34:53.255 10:48:27 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:34:53.255 10:48:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:53.255 10:48:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:53.255 10:48:27 -- common/autotest_common.sh@10 -- # set +x 00:34:53.255 ************************************ 00:34:53.255 START TEST keyring_file 00:34:53.255 ************************************ 00:34:53.255 10:48:27 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:34:53.255 * Looking for test storage... 
00:34:53.255 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:34:53.255 10:48:27 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:53.255 10:48:27 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:34:53.255 10:48:27 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:53.255 10:48:27 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:53.255 10:48:27 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:53.255 10:48:27 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:53.255 10:48:27 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:53.255 10:48:27 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:34:53.255 10:48:27 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:34:53.255 10:48:27 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:34:53.255 10:48:27 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:34:53.255 10:48:27 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:34:53.255 10:48:27 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:34:53.255 10:48:27 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:34:53.255 10:48:27 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:53.255 10:48:27 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:34:53.255 10:48:27 keyring_file -- scripts/common.sh@345 -- # : 1 00:34:53.255 10:48:27 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:53.255 10:48:27 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:53.255 10:48:27 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:34:53.255 10:48:27 keyring_file -- scripts/common.sh@353 -- # local d=1 00:34:53.255 10:48:27 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:53.255 10:48:27 keyring_file -- scripts/common.sh@355 -- # echo 1 00:34:53.255 10:48:27 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:34:53.255 10:48:27 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:34:53.255 10:48:27 keyring_file -- scripts/common.sh@353 -- # local d=2 00:34:53.255 10:48:27 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:53.255 10:48:27 keyring_file -- scripts/common.sh@355 -- # echo 2 00:34:53.255 10:48:27 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:34:53.255 10:48:27 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:53.255 10:48:27 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:53.255 10:48:27 keyring_file -- scripts/common.sh@368 -- # return 0 00:34:53.255 10:48:27 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:53.255 10:48:27 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:53.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.255 --rc genhtml_branch_coverage=1 00:34:53.255 --rc genhtml_function_coverage=1 00:34:53.255 --rc genhtml_legend=1 00:34:53.255 --rc geninfo_all_blocks=1 00:34:53.255 --rc geninfo_unexecuted_blocks=1 00:34:53.255 00:34:53.255 ' 00:34:53.255 10:48:27 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:53.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.255 --rc genhtml_branch_coverage=1 00:34:53.255 --rc genhtml_function_coverage=1 00:34:53.255 --rc genhtml_legend=1 00:34:53.255 --rc geninfo_all_blocks=1 
00:34:53.255 --rc geninfo_unexecuted_blocks=1 00:34:53.255 00:34:53.255 ' 00:34:53.255 10:48:27 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:53.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.255 --rc genhtml_branch_coverage=1 00:34:53.255 --rc genhtml_function_coverage=1 00:34:53.255 --rc genhtml_legend=1 00:34:53.255 --rc geninfo_all_blocks=1 00:34:53.255 --rc geninfo_unexecuted_blocks=1 00:34:53.255 00:34:53.255 ' 00:34:53.255 10:48:27 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:53.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.255 --rc genhtml_branch_coverage=1 00:34:53.255 --rc genhtml_function_coverage=1 00:34:53.255 --rc genhtml_legend=1 00:34:53.255 --rc geninfo_all_blocks=1 00:34:53.255 --rc geninfo_unexecuted_blocks=1 00:34:53.255 00:34:53.255 ' 00:34:53.255 10:48:27 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:34:53.255 10:48:27 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:53.255 10:48:27 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:34:53.515 10:48:27 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:53.515 10:48:27 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:53.515 10:48:27 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:53.515 10:48:27 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:53.515 10:48:27 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:53.515 10:48:27 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:53.515 10:48:27 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:53.515 10:48:27 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:53.515 10:48:27 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:53.515 10:48:27 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:53.515 10:48:27 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:34:53.515 10:48:27 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:34:53.515 10:48:27 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:53.515 10:48:27 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:53.515 10:48:27 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:53.515 10:48:27 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:53.515 10:48:27 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:53.515 10:48:27 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:34:53.515 10:48:27 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:53.515 10:48:27 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:53.515 10:48:27 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:53.515 10:48:27 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.515 10:48:27 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.515 10:48:27 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.515 10:48:27 keyring_file -- paths/export.sh@5 -- # export PATH 00:34:53.515 10:48:27 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.515 10:48:27 keyring_file -- nvmf/common.sh@51 -- # : 0 00:34:53.515 10:48:27 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:53.515 10:48:27 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:53.515 10:48:27 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:53.516 10:48:27 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:53.516 10:48:27 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:53.516 10:48:27 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:53.516 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:53.516 10:48:27 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:53.516 10:48:27 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:53.516 10:48:27 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:53.516 10:48:27 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:34:53.516 10:48:27 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:34:53.516 10:48:27 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:34:53.516 10:48:27 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:34:53.516 10:48:27 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:34:53.516 10:48:27 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:34:53.516 10:48:27 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:34:53.516 10:48:27 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
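The prep_key steps traced next take a raw hex PSK, wrap it in the NVMe TLS PSK interchange format, and write it to a temp file that only the owner can read. A minimal stand-alone sketch of that wrapping follows; it assumes the interchange layout NVMeTLSkey-1:<hash>:<base64(key || CRC-32 LE)>: from the NVMe/TCP TLS spec and that the script feeds the hex string through as ASCII bytes. The inline "python -" program is not shown in the trace, so this is a reconstruction, not a verbatim copy of keyring/common.sh:

    # Build an interchange-format PSK for key0 (digest 0 = no hash); a
    # little-endian CRC-32 of the key is appended before base64-encoding.
    path=$(mktemp)
    python3 -c 'import base64,sys,zlib; key=sys.argv[1].encode(); crc=zlib.crc32(key).to_bytes(4,"little"); print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key+crc).decode()))' 00112233445566778899aabbccddeeff 0 > "$path"
    chmod 0600 "$path"   # the keyring rejects key files readable by group/other

The 0600 chmod matters: a later step in this run deliberately loosens it to 0660 to prove the keyring refuses such files.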
00:34:53.516 10:48:27 keyring_file -- keyring/common.sh@17 -- # name=key0 00:34:53.516 10:48:27 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:53.516 10:48:27 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:53.516 10:48:27 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:53.516 10:48:27 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.tySob5BS5f 00:34:53.516 10:48:27 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:53.516 10:48:27 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:53.516 10:48:27 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:34:53.516 10:48:27 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:53.516 10:48:27 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:34:53.516 10:48:27 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:34:53.516 10:48:27 keyring_file -- nvmf/common.sh@733 -- # python - 00:34:53.516 10:48:27 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.tySob5BS5f 00:34:53.516 10:48:27 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.tySob5BS5f 00:34:53.516 10:48:27 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.tySob5BS5f 00:34:53.516 10:48:27 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:34:53.516 10:48:27 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:34:53.516 10:48:27 keyring_file -- keyring/common.sh@17 -- # name=key1 00:34:53.516 10:48:27 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:34:53.516 10:48:27 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:53.516 10:48:27 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:53.516 10:48:27 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.8KBN5c7248 00:34:53.516 10:48:27 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:34:53.516 10:48:27 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:34:53.516 10:48:27 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:34:53.516 10:48:27 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:53.516 10:48:27 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:34:53.516 10:48:27 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:34:53.516 10:48:27 keyring_file -- nvmf/common.sh@733 -- # python - 00:34:53.516 10:48:27 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.8KBN5c7248 00:34:53.516 10:48:27 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.8KBN5c7248 00:34:53.516 10:48:27 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.8KBN5c7248 00:34:53.516 10:48:27 keyring_file -- keyring/file.sh@30 -- # tgtpid=1793515 00:34:53.516 10:48:27 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:34:53.516 10:48:27 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1793515 00:34:53.516 10:48:27 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1793515 ']' 00:34:53.516 10:48:27 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:53.516 10:48:27 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:53.516 10:48:27 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:53.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:53.516 10:48:27 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:53.516 10:48:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:53.516 [2024-12-12 10:48:27.464143] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:34:53.516 [2024-12-12 10:48:27.464192] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1793515 ] 00:34:53.775 [2024-12-12 10:48:27.540440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:53.775 [2024-12-12 10:48:27.581769] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:34:53.775 10:48:27 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:53.775 10:48:27 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:34:53.775 10:48:27 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:34:53.775 10:48:27 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.775 10:48:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:53.775 [2024-12-12 10:48:27.797721] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:54.033 null0 00:34:54.033 [2024-12-12 10:48:27.829765] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:34:54.033 [2024-12-12 10:48:27.830051] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:34:54.033 10:48:27 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.033 10:48:27 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:54.033 10:48:27 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:34:54.033 10:48:27 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:54.033 10:48:27 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:54.033 10:48:27 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:54.033 10:48:27 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:54.033 10:48:27 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:54.033 10:48:27 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:54.033 10:48:27 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.033 10:48:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:54.033 [2024-12-12 10:48:27.857827] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:34:54.033 request: 00:34:54.033 { 00:34:54.033 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:34:54.033 "secure_channel": false, 00:34:54.033 "listen_address": { 00:34:54.033 "trtype": "tcp", 00:34:54.033 "traddr": "127.0.0.1", 00:34:54.033 "trsvcid": "4420" 00:34:54.033 }, 00:34:54.033 "method": "nvmf_subsystem_add_listener", 00:34:54.033 "req_id": 1 00:34:54.033 } 00:34:54.033 Got JSON-RPC error response 00:34:54.033 response: 00:34:54.033 { 00:34:54.033 
"code": -32602, 00:34:54.033 "message": "Invalid parameters" 00:34:54.033 } 00:34:54.033 10:48:27 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:54.033 10:48:27 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:34:54.033 10:48:27 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:54.033 10:48:27 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:54.033 10:48:27 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:54.033 10:48:27 keyring_file -- keyring/file.sh@47 -- # bperfpid=1793526 00:34:54.033 10:48:27 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1793526 /var/tmp/bperf.sock 00:34:54.033 10:48:27 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:34:54.033 10:48:27 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1793526 ']' 00:34:54.033 10:48:27 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:54.033 10:48:27 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:54.033 10:48:27 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:54.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:54.033 10:48:27 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:54.033 10:48:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:54.033 [2024-12-12 10:48:27.911415] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:34:54.033 [2024-12-12 10:48:27.911458] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1793526 ] 00:34:54.033 [2024-12-12 10:48:27.985561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:54.033 [2024-12-12 10:48:28.025808] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:34:54.291 10:48:28 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:54.291 10:48:28 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:34:54.291 10:48:28 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.tySob5BS5f 00:34:54.291 10:48:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.tySob5BS5f 00:34:54.291 10:48:28 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.8KBN5c7248 00:34:54.291 10:48:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.8KBN5c7248 00:34:54.550 10:48:28 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:34:54.550 10:48:28 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:34:54.550 10:48:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:54.550 10:48:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:54.550 10:48:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:34:54.807 10:48:28 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.tySob5BS5f == \/\t\m\p\/\t\m\p\.\t\y\S\o\b\5\B\S\5\f ]] 00:34:54.807 10:48:28 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:34:54.807 10:48:28 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:34:54.807 10:48:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:54.807 10:48:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:54.807 10:48:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:55.066 10:48:28 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.8KBN5c7248 == \/\t\m\p\/\t\m\p\.\8\K\B\N\5\c\7\2\4\8 ]] 00:34:55.066 10:48:28 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:34:55.066 10:48:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:55.066 10:48:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:55.066 10:48:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:55.066 10:48:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:55.066 10:48:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:55.066 10:48:29 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:34:55.066 10:48:29 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:34:55.066 10:48:29 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:55.066 10:48:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:55.066 10:48:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:55.066 10:48:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:55.066 10:48:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:55.324 10:48:29 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:34:55.325 10:48:29 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:55.325 10:48:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:55.583 [2024-12-12 10:48:29.423836] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:55.583 nvme0n1 00:34:55.583 10:48:29 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:34:55.583 10:48:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:55.583 10:48:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:55.583 10:48:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:55.583 10:48:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:55.583 10:48:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:55.842 10:48:29 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:34:55.842 10:48:29 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:34:55.842 10:48:29 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:34:55.842 10:48:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:55.842 10:48:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:55.842 10:48:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:55.842 10:48:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:56.101 10:48:29 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:34:56.101 10:48:29 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:56.101 Running I/O for 1 seconds... 00:34:57.036 19220.00 IOPS, 75.08 MiB/s 00:34:57.036 Latency(us) 00:34:57.036 [2024-12-12T09:48:31.059Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:57.036 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:34:57.036 nvme0n1 : 1.00 19271.39 75.28 0.00 0.00 6630.44 2715.06 13294.45 00:34:57.036 [2024-12-12T09:48:31.059Z] =================================================================================================================== 00:34:57.036 [2024-12-12T09:48:31.059Z] Total : 19271.39 75.28 0.00 0.00 6630.44 2715.06 13294.45 00:34:57.036 { 00:34:57.036 "results": [ 00:34:57.036 { 00:34:57.036 "job": "nvme0n1", 00:34:57.036 "core_mask": "0x2", 00:34:57.036 "workload": "randrw", 00:34:57.036 "percentage": 50, 00:34:57.036 "status": "finished", 00:34:57.036 "queue_depth": 128, 00:34:57.036 "io_size": 4096, 00:34:57.036 "runtime": 1.004079, 00:34:57.036 "iops": 19271.39199206437, 00:34:57.036 "mibps": 75.27887496900145, 00:34:57.036 "io_failed": 0, 00:34:57.036 "io_timeout": 0, 00:34:57.036 "avg_latency_us": 6630.442512316968, 00:34:57.036 "min_latency_us": 2715.062857142857, 00:34:57.036 "max_latency_us": 13294.445714285714 00:34:57.036 } 00:34:57.036 ], 00:34:57.036 "core_count": 1 00:34:57.036 } 00:34:57.036 10:48:31 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:57.036 10:48:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:34:57.295 10:48:31 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:34:57.295 10:48:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:57.295 10:48:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:57.295 10:48:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:57.295 10:48:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:57.295 10:48:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:57.554 10:48:31 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:34:57.554 10:48:31 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:34:57.554 10:48:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:57.554 10:48:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:57.554 10:48:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:57.554 10:48:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:57.554 10:48:31 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:57.813 10:48:31 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:34:57.813 10:48:31 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:57.813 10:48:31 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:34:57.813 10:48:31 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:57.813 10:48:31 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:34:57.813 10:48:31 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:57.813 10:48:31 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:34:57.813 10:48:31 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:57.813 10:48:31 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:57.813 10:48:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:57.813 [2024-12-12 10:48:31.811454] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:34:57.813 [2024-12-12 10:48:31.811502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1415470 (107): Transport endpoint is not connected 00:34:57.813 [2024-12-12 10:48:31.812496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1415470 (9): Bad file descriptor 00:34:57.813 [2024-12-12 10:48:31.813498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:34:57.813 [2024-12-12 10:48:31.813508] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:34:57.813 [2024-12-12 10:48:31.813516] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:34:57.813 [2024-12-12 10:48:31.813523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
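The controller never leaves the error state because the target side of this test appears to have been set up with key0 while the initiator offered key1, so the TLS handshake cannot complete; the request/response dump just below shows the attach RPC surfacing this as -5 (Input/output error), which is exactly what the NOT wrapper expects. For reference, the matched and mismatched attaches in isolation (addresses and NQNs as in this run, rpc as in the earlier sketch):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Matched PSK: the target knows key0, so this attach succeeds.
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
    # Mismatched PSK: key1 is valid keyring material but not what the target
    # expects, so the connect is torn down and the RPC returns an error.
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 || echo 'attach with key1 failed as expected'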
00:34:57.813 request: 00:34:57.813 { 00:34:57.813 "name": "nvme0", 00:34:57.813 "trtype": "tcp", 00:34:57.813 "traddr": "127.0.0.1", 00:34:57.813 "adrfam": "ipv4", 00:34:57.813 "trsvcid": "4420", 00:34:57.813 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:57.813 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:57.813 "prchk_reftag": false, 00:34:57.813 "prchk_guard": false, 00:34:57.813 "hdgst": false, 00:34:57.813 "ddgst": false, 00:34:57.813 "psk": "key1", 00:34:57.813 "allow_unrecognized_csi": false, 00:34:57.813 "method": "bdev_nvme_attach_controller", 00:34:57.813 "req_id": 1 00:34:57.813 } 00:34:57.813 Got JSON-RPC error response 00:34:57.813 response: 00:34:57.813 { 00:34:57.813 "code": -5, 00:34:57.813 "message": "Input/output error" 00:34:57.813 } 00:34:58.071 10:48:31 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:34:58.072 10:48:31 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:58.072 10:48:31 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:58.072 10:48:31 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:58.072 10:48:31 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:34:58.072 10:48:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:58.072 10:48:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:58.072 10:48:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:58.072 10:48:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:58.072 10:48:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:58.072 10:48:32 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:34:58.072 10:48:32 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:34:58.072 10:48:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:58.072 10:48:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:58.072 10:48:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:58.072 10:48:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:58.072 10:48:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:58.330 10:48:32 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:34:58.330 10:48:32 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:34:58.330 10:48:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:58.589 10:48:32 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:34:58.589 10:48:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:34:58.589 10:48:32 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:34:58.589 10:48:32 keyring_file -- keyring/file.sh@78 -- # jq length 00:34:58.589 10:48:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:58.847 10:48:32 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:34:58.847 10:48:32 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.tySob5BS5f 00:34:58.848 10:48:32 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.tySob5BS5f 00:34:58.848 10:48:32 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:34:58.848 10:48:32 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.tySob5BS5f 00:34:58.848 10:48:32 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:34:58.848 10:48:32 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:58.848 10:48:32 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:34:58.848 10:48:32 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:58.848 10:48:32 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.tySob5BS5f 00:34:58.848 10:48:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.tySob5BS5f 00:34:59.106 [2024-12-12 10:48:32.962652] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.tySob5BS5f': 0100660 00:34:59.106 [2024-12-12 10:48:32.962675] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:34:59.106 request: 00:34:59.106 { 00:34:59.106 "name": "key0", 00:34:59.106 "path": "/tmp/tmp.tySob5BS5f", 00:34:59.106 "method": "keyring_file_add_key", 00:34:59.106 "req_id": 1 00:34:59.106 } 00:34:59.106 Got JSON-RPC error response 00:34:59.106 response: 00:34:59.106 { 00:34:59.106 "code": -1, 00:34:59.107 "message": "Operation not permitted" 00:34:59.107 } 00:34:59.107 10:48:32 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:34:59.107 10:48:32 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:59.107 10:48:32 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:59.107 10:48:32 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:59.107 10:48:32 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.tySob5BS5f 00:34:59.107 10:48:32 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.tySob5BS5f 00:34:59.107 10:48:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.tySob5BS5f 00:34:59.366 10:48:33 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.tySob5BS5f 00:34:59.366 10:48:33 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:34:59.366 10:48:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:59.366 10:48:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:59.366 10:48:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:59.366 10:48:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:59.366 10:48:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:59.366 10:48:33 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:34:59.366 10:48:33 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:59.366 10:48:33 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:34:59.366 10:48:33 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:59.366 10:48:33 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:34:59.366 10:48:33 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:59.366 10:48:33 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:34:59.366 10:48:33 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:59.366 10:48:33 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:59.366 10:48:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:59.625 [2024-12-12 10:48:33.536174] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.tySob5BS5f': No such file or directory 00:34:59.625 [2024-12-12 10:48:33.536197] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:34:59.625 [2024-12-12 10:48:33.536212] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:34:59.625 [2024-12-12 10:48:33.536219] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:34:59.625 [2024-12-12 10:48:33.536227] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:59.625 [2024-12-12 10:48:33.536233] bdev_nvme.c:6801:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:34:59.625 request: 00:34:59.625 { 00:34:59.625 "name": "nvme0", 00:34:59.625 "trtype": "tcp", 00:34:59.625 "traddr": "127.0.0.1", 00:34:59.625 "adrfam": "ipv4", 00:34:59.625 "trsvcid": "4420", 00:34:59.625 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:59.625 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:59.625 "prchk_reftag": false, 00:34:59.625 "prchk_guard": false, 00:34:59.625 "hdgst": false, 00:34:59.625 "ddgst": false, 00:34:59.625 "psk": "key0", 00:34:59.625 "allow_unrecognized_csi": false, 00:34:59.625 "method": "bdev_nvme_attach_controller", 00:34:59.625 "req_id": 1 00:34:59.625 } 00:34:59.625 Got JSON-RPC error response 00:34:59.625 response: 00:34:59.625 { 00:34:59.625 "code": -19, 00:34:59.625 "message": "No such device" 00:34:59.625 } 00:34:59.625 10:48:33 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:34:59.625 10:48:33 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:59.625 10:48:33 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:59.625 10:48:33 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:59.625 10:48:33 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:34:59.625 10:48:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:59.884 10:48:33 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:34:59.884 10:48:33 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:34:59.884 10:48:33 keyring_file -- keyring/common.sh@17 -- # name=key0 00:34:59.884 10:48:33 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:59.884 10:48:33 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:59.884 10:48:33 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:59.884 10:48:33 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.yXWSBGZvUN 00:34:59.884 10:48:33 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:59.884 10:48:33 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:59.884 10:48:33 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:34:59.884 10:48:33 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:59.884 10:48:33 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:34:59.884 10:48:33 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:34:59.885 10:48:33 keyring_file -- nvmf/common.sh@733 -- # python - 00:34:59.885 10:48:33 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.yXWSBGZvUN 00:34:59.885 10:48:33 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.yXWSBGZvUN 00:34:59.885 10:48:33 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.yXWSBGZvUN 00:34:59.885 10:48:33 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.yXWSBGZvUN 00:34:59.885 10:48:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.yXWSBGZvUN 00:35:00.143 10:48:33 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:00.143 10:48:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:00.402 nvme0n1 00:35:00.402 10:48:34 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:35:00.402 10:48:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:00.402 10:48:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:00.402 10:48:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:00.402 10:48:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:00.402 10:48:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:00.660 10:48:34 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:35:00.660 10:48:34 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:35:00.660 10:48:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:00.660 10:48:34 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:35:00.660 10:48:34 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:35:00.660 10:48:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:00.660 10:48:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:00.660 10:48:34 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:00.918 10:48:34 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:35:00.918 10:48:34 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:35:00.918 10:48:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:00.918 10:48:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:00.918 10:48:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:00.918 10:48:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:00.918 10:48:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:01.177 10:48:35 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:35:01.177 10:48:35 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:01.177 10:48:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:01.435 10:48:35 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:35:01.435 10:48:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:01.435 10:48:35 keyring_file -- keyring/file.sh@105 -- # jq length 00:35:01.435 10:48:35 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:35:01.435 10:48:35 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.yXWSBGZvUN 00:35:01.435 10:48:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.yXWSBGZvUN 00:35:01.694 10:48:35 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.8KBN5c7248 00:35:01.694 10:48:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.8KBN5c7248 00:35:01.952 10:48:35 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:01.952 10:48:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:02.211 nvme0n1 00:35:02.211 10:48:36 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:35:02.211 10:48:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:35:02.470 10:48:36 keyring_file -- keyring/file.sh@113 -- # config='{ 00:35:02.470 "subsystems": [ 00:35:02.470 { 00:35:02.470 "subsystem": "keyring", 00:35:02.470 "config": [ 00:35:02.470 { 00:35:02.470 "method": "keyring_file_add_key", 00:35:02.470 "params": { 00:35:02.470 "name": "key0", 00:35:02.470 "path": "/tmp/tmp.yXWSBGZvUN" 00:35:02.470 } 00:35:02.470 }, 00:35:02.470 { 00:35:02.470 "method": "keyring_file_add_key", 00:35:02.470 "params": { 00:35:02.470 "name": "key1", 00:35:02.470 "path": "/tmp/tmp.8KBN5c7248" 00:35:02.470 } 00:35:02.470 } 00:35:02.470 ] 00:35:02.470 
}, 00:35:02.470 { 00:35:02.470 "subsystem": "iobuf", 00:35:02.470 "config": [ 00:35:02.470 { 00:35:02.470 "method": "iobuf_set_options", 00:35:02.470 "params": { 00:35:02.470 "small_pool_count": 8192, 00:35:02.470 "large_pool_count": 1024, 00:35:02.470 "small_bufsize": 8192, 00:35:02.470 "large_bufsize": 135168, 00:35:02.470 "enable_numa": false 00:35:02.470 } 00:35:02.470 } 00:35:02.470 ] 00:35:02.470 }, 00:35:02.470 { 00:35:02.470 "subsystem": "sock", 00:35:02.470 "config": [ 00:35:02.470 { 00:35:02.470 "method": "sock_set_default_impl", 00:35:02.470 "params": { 00:35:02.470 "impl_name": "posix" 00:35:02.470 } 00:35:02.470 }, 00:35:02.470 { 00:35:02.470 "method": "sock_impl_set_options", 00:35:02.470 "params": { 00:35:02.470 "impl_name": "ssl", 00:35:02.470 "recv_buf_size": 4096, 00:35:02.470 "send_buf_size": 4096, 00:35:02.470 "enable_recv_pipe": true, 00:35:02.470 "enable_quickack": false, 00:35:02.470 "enable_placement_id": 0, 00:35:02.470 "enable_zerocopy_send_server": true, 00:35:02.470 "enable_zerocopy_send_client": false, 00:35:02.470 "zerocopy_threshold": 0, 00:35:02.470 "tls_version": 0, 00:35:02.470 "enable_ktls": false 00:35:02.470 } 00:35:02.470 }, 00:35:02.470 { 00:35:02.470 "method": "sock_impl_set_options", 00:35:02.470 "params": { 00:35:02.470 "impl_name": "posix", 00:35:02.470 "recv_buf_size": 2097152, 00:35:02.470 "send_buf_size": 2097152, 00:35:02.470 "enable_recv_pipe": true, 00:35:02.470 "enable_quickack": false, 00:35:02.470 "enable_placement_id": 0, 00:35:02.470 "enable_zerocopy_send_server": true, 00:35:02.470 "enable_zerocopy_send_client": false, 00:35:02.471 "zerocopy_threshold": 0, 00:35:02.471 "tls_version": 0, 00:35:02.471 "enable_ktls": false 00:35:02.471 } 00:35:02.471 } 00:35:02.471 ] 00:35:02.471 }, 00:35:02.471 { 00:35:02.471 "subsystem": "vmd", 00:35:02.471 "config": [] 00:35:02.471 }, 00:35:02.471 { 00:35:02.471 "subsystem": "accel", 00:35:02.471 "config": [ 00:35:02.471 { 00:35:02.471 "method": "accel_set_options", 00:35:02.471 "params": { 00:35:02.471 "small_cache_size": 128, 00:35:02.471 "large_cache_size": 16, 00:35:02.471 "task_count": 2048, 00:35:02.471 "sequence_count": 2048, 00:35:02.471 "buf_count": 2048 00:35:02.471 } 00:35:02.471 } 00:35:02.471 ] 00:35:02.471 }, 00:35:02.471 { 00:35:02.471 "subsystem": "bdev", 00:35:02.471 "config": [ 00:35:02.471 { 00:35:02.471 "method": "bdev_set_options", 00:35:02.471 "params": { 00:35:02.471 "bdev_io_pool_size": 65535, 00:35:02.471 "bdev_io_cache_size": 256, 00:35:02.471 "bdev_auto_examine": true, 00:35:02.471 "iobuf_small_cache_size": 128, 00:35:02.471 "iobuf_large_cache_size": 16 00:35:02.471 } 00:35:02.471 }, 00:35:02.471 { 00:35:02.471 "method": "bdev_raid_set_options", 00:35:02.471 "params": { 00:35:02.471 "process_window_size_kb": 1024, 00:35:02.471 "process_max_bandwidth_mb_sec": 0 00:35:02.471 } 00:35:02.471 }, 00:35:02.471 { 00:35:02.471 "method": "bdev_iscsi_set_options", 00:35:02.471 "params": { 00:35:02.471 "timeout_sec": 30 00:35:02.471 } 00:35:02.471 }, 00:35:02.471 { 00:35:02.471 "method": "bdev_nvme_set_options", 00:35:02.471 "params": { 00:35:02.471 "action_on_timeout": "none", 00:35:02.471 "timeout_us": 0, 00:35:02.471 "timeout_admin_us": 0, 00:35:02.471 "keep_alive_timeout_ms": 10000, 00:35:02.471 "arbitration_burst": 0, 00:35:02.471 "low_priority_weight": 0, 00:35:02.471 "medium_priority_weight": 0, 00:35:02.471 "high_priority_weight": 0, 00:35:02.471 "nvme_adminq_poll_period_us": 10000, 00:35:02.471 "nvme_ioq_poll_period_us": 0, 00:35:02.471 "io_queue_requests": 512, 00:35:02.471 
"delay_cmd_submit": true, 00:35:02.471 "transport_retry_count": 4, 00:35:02.471 "bdev_retry_count": 3, 00:35:02.471 "transport_ack_timeout": 0, 00:35:02.471 "ctrlr_loss_timeout_sec": 0, 00:35:02.471 "reconnect_delay_sec": 0, 00:35:02.471 "fast_io_fail_timeout_sec": 0, 00:35:02.471 "disable_auto_failback": false, 00:35:02.471 "generate_uuids": false, 00:35:02.471 "transport_tos": 0, 00:35:02.471 "nvme_error_stat": false, 00:35:02.471 "rdma_srq_size": 0, 00:35:02.471 "io_path_stat": false, 00:35:02.471 "allow_accel_sequence": false, 00:35:02.471 "rdma_max_cq_size": 0, 00:35:02.471 "rdma_cm_event_timeout_ms": 0, 00:35:02.471 "dhchap_digests": [ 00:35:02.471 "sha256", 00:35:02.471 "sha384", 00:35:02.471 "sha512" 00:35:02.471 ], 00:35:02.471 "dhchap_dhgroups": [ 00:35:02.471 "null", 00:35:02.471 "ffdhe2048", 00:35:02.471 "ffdhe3072", 00:35:02.471 "ffdhe4096", 00:35:02.471 "ffdhe6144", 00:35:02.471 "ffdhe8192" 00:35:02.471 ], 00:35:02.471 "rdma_umr_per_io": false 00:35:02.471 } 00:35:02.471 }, 00:35:02.471 { 00:35:02.471 "method": "bdev_nvme_attach_controller", 00:35:02.471 "params": { 00:35:02.471 "name": "nvme0", 00:35:02.471 "trtype": "TCP", 00:35:02.471 "adrfam": "IPv4", 00:35:02.471 "traddr": "127.0.0.1", 00:35:02.471 "trsvcid": "4420", 00:35:02.471 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:02.471 "prchk_reftag": false, 00:35:02.471 "prchk_guard": false, 00:35:02.471 "ctrlr_loss_timeout_sec": 0, 00:35:02.471 "reconnect_delay_sec": 0, 00:35:02.471 "fast_io_fail_timeout_sec": 0, 00:35:02.471 "psk": "key0", 00:35:02.471 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:02.471 "hdgst": false, 00:35:02.471 "ddgst": false, 00:35:02.471 "multipath": "multipath" 00:35:02.471 } 00:35:02.471 }, 00:35:02.471 { 00:35:02.471 "method": "bdev_nvme_set_hotplug", 00:35:02.471 "params": { 00:35:02.471 "period_us": 100000, 00:35:02.471 "enable": false 00:35:02.471 } 00:35:02.471 }, 00:35:02.471 { 00:35:02.471 "method": "bdev_wait_for_examine" 00:35:02.471 } 00:35:02.471 ] 00:35:02.471 }, 00:35:02.471 { 00:35:02.471 "subsystem": "nbd", 00:35:02.471 "config": [] 00:35:02.471 } 00:35:02.471 ] 00:35:02.471 }' 00:35:02.471 10:48:36 keyring_file -- keyring/file.sh@115 -- # killprocess 1793526 00:35:02.471 10:48:36 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1793526 ']' 00:35:02.471 10:48:36 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1793526 00:35:02.471 10:48:36 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:02.471 10:48:36 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:02.471 10:48:36 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1793526 00:35:02.471 10:48:36 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:02.471 10:48:36 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:02.471 10:48:36 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1793526' 00:35:02.471 killing process with pid 1793526 00:35:02.471 10:48:36 keyring_file -- common/autotest_common.sh@973 -- # kill 1793526 00:35:02.471 Received shutdown signal, test time was about 1.000000 seconds 00:35:02.471 00:35:02.471 Latency(us) 00:35:02.471 [2024-12-12T09:48:36.494Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:02.471 [2024-12-12T09:48:36.494Z] =================================================================================================================== 00:35:02.471 [2024-12-12T09:48:36.494Z] Total : 0.00 0.00 0.00 0.00 
0.00 0.00 0.00 00:35:02.471 10:48:36 keyring_file -- common/autotest_common.sh@978 -- # wait 1793526 00:35:02.731 10:48:36 keyring_file -- keyring/file.sh@118 -- # bperfpid=1795012 00:35:02.731 10:48:36 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1795012 /var/tmp/bperf.sock 00:35:02.731 10:48:36 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1795012 ']' 00:35:02.731 10:48:36 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:02.731 10:48:36 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:35:02.731 10:48:36 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:02.731 10:48:36 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:02.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:02.731 10:48:36 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:35:02.731 "subsystems": [ 00:35:02.731 { 00:35:02.731 "subsystem": "keyring", 00:35:02.731 "config": [ 00:35:02.731 { 00:35:02.731 "method": "keyring_file_add_key", 00:35:02.731 "params": { 00:35:02.731 "name": "key0", 00:35:02.731 "path": "/tmp/tmp.yXWSBGZvUN" 00:35:02.731 } 00:35:02.731 }, 00:35:02.731 { 00:35:02.731 "method": "keyring_file_add_key", 00:35:02.731 "params": { 00:35:02.731 "name": "key1", 00:35:02.731 "path": "/tmp/tmp.8KBN5c7248" 00:35:02.731 } 00:35:02.731 } 00:35:02.731 ] 00:35:02.731 }, 00:35:02.731 { 00:35:02.731 "subsystem": "iobuf", 00:35:02.731 "config": [ 00:35:02.731 { 00:35:02.731 "method": "iobuf_set_options", 00:35:02.731 "params": { 00:35:02.731 "small_pool_count": 8192, 00:35:02.731 "large_pool_count": 1024, 00:35:02.731 "small_bufsize": 8192, 00:35:02.731 "large_bufsize": 135168, 00:35:02.731 "enable_numa": false 00:35:02.731 } 00:35:02.731 } 00:35:02.731 ] 00:35:02.731 }, 00:35:02.731 { 00:35:02.731 "subsystem": "sock", 00:35:02.731 "config": [ 00:35:02.731 { 00:35:02.731 "method": "sock_set_default_impl", 00:35:02.731 "params": { 00:35:02.731 "impl_name": "posix" 00:35:02.731 } 00:35:02.731 }, 00:35:02.731 { 00:35:02.731 "method": "sock_impl_set_options", 00:35:02.731 "params": { 00:35:02.731 "impl_name": "ssl", 00:35:02.731 "recv_buf_size": 4096, 00:35:02.731 "send_buf_size": 4096, 00:35:02.731 "enable_recv_pipe": true, 00:35:02.731 "enable_quickack": false, 00:35:02.731 "enable_placement_id": 0, 00:35:02.731 "enable_zerocopy_send_server": true, 00:35:02.731 "enable_zerocopy_send_client": false, 00:35:02.731 "zerocopy_threshold": 0, 00:35:02.731 "tls_version": 0, 00:35:02.731 "enable_ktls": false 00:35:02.731 } 00:35:02.731 }, 00:35:02.731 { 00:35:02.731 "method": "sock_impl_set_options", 00:35:02.731 "params": { 00:35:02.731 "impl_name": "posix", 00:35:02.731 "recv_buf_size": 2097152, 00:35:02.731 "send_buf_size": 2097152, 00:35:02.731 "enable_recv_pipe": true, 00:35:02.731 "enable_quickack": false, 00:35:02.731 "enable_placement_id": 0, 00:35:02.731 "enable_zerocopy_send_server": true, 00:35:02.731 "enable_zerocopy_send_client": false, 00:35:02.731 "zerocopy_threshold": 0, 00:35:02.731 "tls_version": 0, 00:35:02.731 "enable_ktls": false 00:35:02.731 } 00:35:02.731 } 00:35:02.731 ] 00:35:02.731 }, 00:35:02.731 { 00:35:02.731 "subsystem": "vmd", 00:35:02.731 "config": [] 00:35:02.731 }, 00:35:02.731 { 00:35:02.731 "subsystem": "accel", 
00:35:02.731 "config": [ 00:35:02.731 { 00:35:02.731 "method": "accel_set_options", 00:35:02.731 "params": { 00:35:02.731 "small_cache_size": 128, 00:35:02.731 "large_cache_size": 16, 00:35:02.731 "task_count": 2048, 00:35:02.731 "sequence_count": 2048, 00:35:02.731 "buf_count": 2048 00:35:02.731 } 00:35:02.731 } 00:35:02.731 ] 00:35:02.731 }, 00:35:02.731 { 00:35:02.731 "subsystem": "bdev", 00:35:02.731 "config": [ 00:35:02.731 { 00:35:02.731 "method": "bdev_set_options", 00:35:02.731 "params": { 00:35:02.731 "bdev_io_pool_size": 65535, 00:35:02.731 "bdev_io_cache_size": 256, 00:35:02.731 "bdev_auto_examine": true, 00:35:02.731 "iobuf_small_cache_size": 128, 00:35:02.731 "iobuf_large_cache_size": 16 00:35:02.731 } 00:35:02.731 }, 00:35:02.731 { 00:35:02.731 "method": "bdev_raid_set_options", 00:35:02.731 "params": { 00:35:02.731 "process_window_size_kb": 1024, 00:35:02.731 "process_max_bandwidth_mb_sec": 0 00:35:02.731 } 00:35:02.731 }, 00:35:02.731 { 00:35:02.731 "method": "bdev_iscsi_set_options", 00:35:02.731 "params": { 00:35:02.731 "timeout_sec": 30 00:35:02.731 } 00:35:02.731 }, 00:35:02.731 { 00:35:02.731 "method": "bdev_nvme_set_options", 00:35:02.731 "params": { 00:35:02.731 "action_on_timeout": "none", 00:35:02.731 "timeout_us": 0, 00:35:02.731 "timeout_admin_us": 0, 00:35:02.731 "keep_alive_timeout_ms": 10000, 00:35:02.731 "arbitration_burst": 0, 00:35:02.731 "low_priority_weight": 0, 00:35:02.731 "medium_priority_weight": 0, 00:35:02.731 "high_priority_weight": 0, 00:35:02.731 "nvme_adminq_poll_period_us": 10000, 00:35:02.731 "nvme_ioq_poll_period_us": 0, 00:35:02.731 "io_queue_requests": 512, 00:35:02.731 "delay_cmd_submit": true, 00:35:02.731 "transport_retry_count": 4, 00:35:02.731 "bdev_retry_count": 3, 00:35:02.731 "transport_ack_timeout": 0, 00:35:02.731 "ctrlr_loss_timeout_sec": 0, 00:35:02.731 "reconnect_delay_sec": 0, 00:35:02.731 "fast_io_fail_timeout_sec": 0, 00:35:02.731 "disable_auto_failback": false, 00:35:02.731 "generate_uuids": false, 00:35:02.731 "transport_tos": 0, 00:35:02.731 "nvme_error_stat": false, 00:35:02.731 "rdma_srq_size": 0, 00:35:02.731 "io_path_stat": false, 00:35:02.731 "allow_accel_sequence": false, 00:35:02.731 "rdma_max_cq_size": 0, 00:35:02.731 "rdma_cm_event_timeout_ms": 0, 00:35:02.731 "dhchap_digests": [ 00:35:02.731 "sha256", 00:35:02.731 "sha384", 00:35:02.732 "sha512" 00:35:02.732 ], 00:35:02.732 "dhchap_dhgroups": [ 00:35:02.732 "null", 00:35:02.732 "ffdhe2048", 00:35:02.732 "ffdhe3072", 00:35:02.732 "ffdhe4096", 00:35:02.732 "ffdhe6144", 00:35:02.732 "ffdhe8192" 00:35:02.732 ], 00:35:02.732 "rdma_umr_per_io": false 00:35:02.732 } 00:35:02.732 }, 00:35:02.732 { 00:35:02.732 "method": "bdev_nvme_attach_controller", 00:35:02.732 "params": { 00:35:02.732 "name": "nvme0", 00:35:02.732 "trtype": "TCP", 00:35:02.732 "adrfam": "IPv4", 00:35:02.732 "traddr": "127.0.0.1", 00:35:02.732 "trsvcid": "4420", 00:35:02.732 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:02.732 "prchk_reftag": false, 00:35:02.732 "prchk_guard": false, 00:35:02.732 "ctrlr_loss_timeout_sec": 0, 00:35:02.732 "reconnect_delay_sec": 0, 00:35:02.732 "fast_io_fail_timeout_sec": 0, 00:35:02.732 "psk": "key0", 00:35:02.732 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:02.732 "hdgst": false, 00:35:02.732 "ddgst": false, 00:35:02.732 "multipath": "multipath" 00:35:02.732 } 00:35:02.732 }, 00:35:02.732 { 00:35:02.732 "method": "bdev_nvme_set_hotplug", 00:35:02.732 "params": { 00:35:02.732 "period_us": 100000, 00:35:02.732 "enable": false 00:35:02.732 } 00:35:02.732 }, 00:35:02.732 
{ 00:35:02.732 "method": "bdev_wait_for_examine" 00:35:02.732 } 00:35:02.732 ] 00:35:02.732 }, 00:35:02.732 { 00:35:02.732 "subsystem": "nbd", 00:35:02.732 "config": [] 00:35:02.732 } 00:35:02.732 ] 00:35:02.732 }' 00:35:02.732 10:48:36 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:02.732 10:48:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:02.732 [2024-12-12 10:48:36.539623] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 00:35:02.732 [2024-12-12 10:48:36.539674] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1795012 ] 00:35:02.732 [2024-12-12 10:48:36.614326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:02.732 [2024-12-12 10:48:36.653475] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:02.991 [2024-12-12 10:48:36.814411] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:03.558 10:48:37 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:03.558 10:48:37 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:03.558 10:48:37 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:35:03.558 10:48:37 keyring_file -- keyring/file.sh@121 -- # jq length 00:35:03.558 10:48:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:03.817 10:48:37 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:35:03.817 10:48:37 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:35:03.817 10:48:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:03.817 10:48:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:03.817 10:48:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:03.817 10:48:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:03.817 10:48:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:03.817 10:48:37 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:35:03.817 10:48:37 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:35:03.817 10:48:37 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:03.817 10:48:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:03.817 10:48:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:03.817 10:48:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:03.817 10:48:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:04.076 10:48:37 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:35:04.076 10:48:37 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:35:04.076 10:48:37 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:35:04.076 10:48:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:35:04.335 10:48:38 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:35:04.335 10:48:38 keyring_file -- 
keyring/file.sh@1 -- # cleanup 00:35:04.335 10:48:38 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.yXWSBGZvUN /tmp/tmp.8KBN5c7248 00:35:04.335 10:48:38 keyring_file -- keyring/file.sh@20 -- # killprocess 1795012 00:35:04.335 10:48:38 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1795012 ']' 00:35:04.335 10:48:38 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1795012 00:35:04.335 10:48:38 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:04.335 10:48:38 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:04.335 10:48:38 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1795012 00:35:04.335 10:48:38 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:04.335 10:48:38 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:04.335 10:48:38 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1795012' 00:35:04.335 killing process with pid 1795012 00:35:04.335 10:48:38 keyring_file -- common/autotest_common.sh@973 -- # kill 1795012 00:35:04.335 Received shutdown signal, test time was about 1.000000 seconds 00:35:04.335 00:35:04.335 Latency(us) 00:35:04.335 [2024-12-12T09:48:38.358Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:04.335 [2024-12-12T09:48:38.358Z] =================================================================================================================== 00:35:04.335 [2024-12-12T09:48:38.358Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:04.335 10:48:38 keyring_file -- common/autotest_common.sh@978 -- # wait 1795012 00:35:04.594 10:48:38 keyring_file -- keyring/file.sh@21 -- # killprocess 1793515 00:35:04.594 10:48:38 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1793515 ']' 00:35:04.594 10:48:38 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1793515 00:35:04.594 10:48:38 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:04.594 10:48:38 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:04.594 10:48:38 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1793515 00:35:04.594 10:48:38 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:04.594 10:48:38 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:04.594 10:48:38 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1793515' 00:35:04.594 killing process with pid 1793515 00:35:04.594 10:48:38 keyring_file -- common/autotest_common.sh@973 -- # kill 1793515 00:35:04.594 10:48:38 keyring_file -- common/autotest_common.sh@978 -- # wait 1793515 00:35:04.854 00:35:04.854 real 0m11.628s 00:35:04.854 user 0m28.863s 00:35:04.854 sys 0m2.674s 00:35:04.854 10:48:38 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:04.854 10:48:38 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:04.854 ************************************ 00:35:04.854 END TEST keyring_file 00:35:04.854 ************************************ 00:35:04.854 10:48:38 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:35:04.854 10:48:38 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:04.855 10:48:38 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:04.855 10:48:38 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:35:04.855 10:48:38 -- common/autotest_common.sh@10 -- # set +x 00:35:04.855 ************************************ 00:35:04.855 START TEST keyring_linux 00:35:04.855 ************************************ 00:35:04.855 10:48:38 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:04.855 Joined session keyring: 58626981 00:35:05.115 * Looking for test storage... 00:35:05.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:05.115 10:48:38 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:05.115 10:48:38 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:35:05.115 10:48:38 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:05.115 10:48:38 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:05.115 10:48:38 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:05.115 10:48:38 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:05.115 10:48:38 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:05.115 10:48:38 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:35:05.115 10:48:38 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:35:05.115 10:48:38 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:35:05.115 10:48:38 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:35:05.115 10:48:38 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:35:05.115 10:48:38 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:35:05.115 10:48:38 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:35:05.115 10:48:38 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:05.115 10:48:38 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:35:05.115 10:48:38 keyring_linux -- scripts/common.sh@345 -- # : 1 00:35:05.115 10:48:38 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:05.115 10:48:38 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:05.115 10:48:38 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:35:05.115 10:48:38 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:35:05.115 10:48:38 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:05.115 10:48:38 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:35:05.115 10:48:38 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:35:05.115 10:48:38 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:35:05.115 10:48:38 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:35:05.115 10:48:38 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:05.115 10:48:38 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:35:05.115 10:48:38 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:35:05.115 10:48:38 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:05.115 10:48:38 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:05.115 10:48:38 keyring_linux -- scripts/common.sh@368 -- # return 0 00:35:05.115 10:48:38 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:05.115 10:48:38 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:05.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:05.115 --rc genhtml_branch_coverage=1 00:35:05.115 --rc genhtml_function_coverage=1 00:35:05.115 --rc genhtml_legend=1 00:35:05.115 --rc geninfo_all_blocks=1 00:35:05.115 --rc geninfo_unexecuted_blocks=1 00:35:05.115 00:35:05.115 ' 00:35:05.115 10:48:38 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:05.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:05.115 --rc genhtml_branch_coverage=1 00:35:05.115 --rc genhtml_function_coverage=1 00:35:05.115 --rc genhtml_legend=1 00:35:05.115 --rc geninfo_all_blocks=1 00:35:05.115 --rc geninfo_unexecuted_blocks=1 00:35:05.115 00:35:05.115 ' 00:35:05.115 10:48:38 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:05.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:05.115 --rc genhtml_branch_coverage=1 00:35:05.115 --rc genhtml_function_coverage=1 00:35:05.115 --rc genhtml_legend=1 00:35:05.115 --rc geninfo_all_blocks=1 00:35:05.115 --rc geninfo_unexecuted_blocks=1 00:35:05.115 00:35:05.115 ' 00:35:05.115 10:48:38 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:05.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:05.115 --rc genhtml_branch_coverage=1 00:35:05.115 --rc genhtml_function_coverage=1 00:35:05.115 --rc genhtml_legend=1 00:35:05.115 --rc geninfo_all_blocks=1 00:35:05.115 --rc geninfo_unexecuted_blocks=1 00:35:05.115 00:35:05.115 ' 00:35:05.115 10:48:38 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:05.115 10:48:38 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:05.115 10:48:38 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:35:05.115 10:48:38 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:05.115 10:48:38 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:05.115 10:48:38 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:05.116 10:48:38 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:05.116 10:48:38 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:35:05.116 10:48:38 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:05.116 10:48:38 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:05.116 10:48:38 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:05.116 10:48:38 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:05.116 10:48:38 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:05.116 10:48:38 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:35:05.116 10:48:38 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:35:05.116 10:48:38 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:05.116 10:48:38 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:05.116 10:48:38 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:05.116 10:48:38 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:05.116 10:48:38 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:05.116 10:48:39 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:35:05.116 10:48:39 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:05.116 10:48:39 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:05.116 10:48:39 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:05.116 10:48:39 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.116 10:48:39 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.116 10:48:39 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.116 10:48:39 keyring_linux -- paths/export.sh@5 -- # export PATH 00:35:05.116 10:48:39 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
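
For reference, the NVME_HOSTNQN captured above comes from `nvme gen-hostnqn`, which emits a UUID-form NQN under nqn.2014-08.org.nvmexpress. A rough stand-in when nvme-cli is not installed might look like the line below; this is a sketch assuming only uuidgen is available, and unlike nvme-cli it makes no attempt to derive the UUID from stable system identifiers.

# Sketch: UUID-form host NQN without nvme-cli (assumes uuidgen exists).
printf 'nqn.2014-08.org.nvmexpress:uuid:%s\n' "$(uuidgen)"
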
00:35:05.116 10:48:39 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:35:05.116 10:48:39 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:05.116 10:48:39 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:05.116 10:48:39 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:05.116 10:48:39 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:05.116 10:48:39 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:05.116 10:48:39 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:05.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:05.116 10:48:39 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:05.116 10:48:39 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:05.116 10:48:39 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:05.116 10:48:39 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:05.116 10:48:39 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:05.116 10:48:39 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:05.116 10:48:39 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:35:05.116 10:48:39 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:35:05.116 10:48:39 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:35:05.116 10:48:39 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:35:05.116 10:48:39 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:05.116 10:48:39 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:35:05.116 10:48:39 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:05.116 10:48:39 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:05.116 10:48:39 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:35:05.116 10:48:39 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:05.116 10:48:39 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:05.116 10:48:39 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:05.116 10:48:39 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:05.116 10:48:39 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:05.116 10:48:39 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:05.116 10:48:39 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:05.116 10:48:39 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:35:05.116 10:48:39 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:35:05.116 /tmp/:spdk-test:key0 00:35:05.116 10:48:39 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:35:05.116 10:48:39 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:05.116 10:48:39 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:35:05.116 10:48:39 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:05.116 10:48:39 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:05.116 10:48:39 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:35:05.116 
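
Both prep_key calls funnel a raw hex string through `python -` to build the TLS PSK interchange form that later steps load with keyctl (NVMeTLSkey-1:<hash>:<base64>:). The sketch below reproduces that encoding under the assumption that the interchange rule is base64 over the key string's bytes followed by their CRC32 in little-endian order, with a two-digit hash field ("00" here, since digest=0 means no PSK hash); it is not the literal common.sh code.

# Sketch of the interchange encoding (assumptions noted above).
key=00112233445566778899aabbccddeeff digest=0 python3 - <<'PY'
import base64, os, struct, zlib
key = os.environ["key"].encode()                  # key string as raw bytes
crc = struct.pack("<I", zlib.crc32(key))          # CRC32 of those bytes, LE
tag = format(int(os.environ["digest"]), "02x")    # two-digit hash indicator
print(f"NVMeTLSkey-1:{tag}:{base64.b64encode(key + crc).decode()}:")
PY
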
10:48:39 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:05.116 10:48:39 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:05.116 10:48:39 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:05.116 10:48:39 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:05.116 10:48:39 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:05.116 10:48:39 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:05.116 10:48:39 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:05.116 10:48:39 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:35:05.116 10:48:39 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:35:05.116 /tmp/:spdk-test:key1 00:35:05.116 10:48:39 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1795548 00:35:05.116 10:48:39 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1795548 00:35:05.116 10:48:39 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:05.116 10:48:39 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1795548 ']' 00:35:05.116 10:48:39 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:05.116 10:48:39 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:05.116 10:48:39 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:05.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:05.116 10:48:39 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:05.116 10:48:39 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:05.376 [2024-12-12 10:48:39.169838] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
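
waitforlisten above blocks (max_retries=100) until the freshly launched spdk_tgt answers on /var/tmp/spdk.sock. Reduced to its core, the pattern is launch, poll an RPC, and bail if the retry budget runs out or the process dies; the sketch below shows that shape under the assumption that polling rpc_get_methods is an adequate readiness probe, which is simpler than what autotest_common.sh actually checks.

# Sketch: start the target, then poll its RPC socket until it answers.
./build/bin/spdk_tgt & tgtpid=$!
for ((i = 0; i < 100; i++)); do
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    kill -0 "$tgtpid" 2>/dev/null || { echo "spdk_tgt died" >&2; exit 1; }
    sleep 0.1
done
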
00:35:05.376 [2024-12-12 10:48:39.169885] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1795548 ] 00:35:05.376 [2024-12-12 10:48:39.243104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:05.376 [2024-12-12 10:48:39.281804] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:35:06.312 10:48:39 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:06.312 10:48:39 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:06.312 10:48:39 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:35:06.312 10:48:39 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.312 10:48:39 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:06.312 [2024-12-12 10:48:39.992535] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:06.312 null0 00:35:06.312 [2024-12-12 10:48:40.024594] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:06.312 [2024-12-12 10:48:40.024881] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:06.312 10:48:40 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.312 10:48:40 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:35:06.312 913690785 00:35:06.312 10:48:40 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:35:06.312 667683120 00:35:06.312 10:48:40 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1795776 00:35:06.312 10:48:40 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1795776 /var/tmp/bperf.sock 00:35:06.312 10:48:40 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:35:06.312 10:48:40 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1795776 ']' 00:35:06.312 10:48:40 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:06.312 10:48:40 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:06.312 10:48:40 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:06.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:06.312 10:48:40 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:06.312 10:48:40 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:06.312 [2024-12-12 10:48:40.095256] Starting SPDK v25.01-pre git sha1 b9cf27559 / DPDK 24.03.0 initialization... 
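
The two `keyctl add user ... @s` calls above load the interchange-format PSKs into the session keyring and print the kernel-assigned serial numbers (913690785 and 667683120 in this run); every later step refers to the keys by name or by those serials. The lifecycle in isolation, with the payload elided and serials that will differ on any other run:

# keyutils lifecycle exercised by this test (payload shortened to "...").
sn=$(keyctl add user :spdk-test:key0 'NVMeTLSkey-1:00:...:' @s)   # prints the serial
keyctl search @s user :spdk-test:key0    # name -> serial lookup (same number)
keyctl print "$sn"                       # dump the payload
keyctl unlink "$sn"                      # remove the link ("1 links removed")
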
00:35:06.312 [2024-12-12 10:48:40.095299] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1795776 ] 00:35:06.312 [2024-12-12 10:48:40.154612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:06.312 [2024-12-12 10:48:40.196083] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:06.312 10:48:40 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:06.312 10:48:40 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:06.312 10:48:40 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:35:06.312 10:48:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:35:06.571 10:48:40 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:35:06.571 10:48:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:06.829 10:48:40 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:06.829 10:48:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:06.829 [2024-12-12 10:48:40.848082] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:07.088 nvme0n1 00:35:07.088 10:48:40 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:35:07.088 10:48:40 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:35:07.088 10:48:40 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:07.088 10:48:40 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:07.088 10:48:40 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:07.088 10:48:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:07.347 10:48:41 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:35:07.347 10:48:41 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:07.347 10:48:41 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:35:07.347 10:48:41 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:35:07.347 10:48:41 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:35:07.347 10:48:41 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:07.347 10:48:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:07.347 10:48:41 keyring_linux -- keyring/linux.sh@25 -- # sn=913690785 00:35:07.347 10:48:41 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:35:07.347 10:48:41 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:07.347 10:48:41 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 913690785 == \9\1\3\6\9\0\7\8\5 ]] 00:35:07.347 10:48:41 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 913690785 00:35:07.347 10:48:41 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:35:07.347 10:48:41 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:07.605 Running I/O for 1 seconds... 00:35:08.539 21634.00 IOPS, 84.51 MiB/s 00:35:08.539 Latency(us) 00:35:08.539 [2024-12-12T09:48:42.562Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:08.539 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:08.539 nvme0n1 : 1.01 21634.81 84.51 0.00 0.00 5897.02 4649.94 14043.43 00:35:08.539 [2024-12-12T09:48:42.562Z] =================================================================================================================== 00:35:08.539 [2024-12-12T09:48:42.562Z] Total : 21634.81 84.51 0.00 0.00 5897.02 4649.94 14043.43 00:35:08.539 { 00:35:08.539 "results": [ 00:35:08.539 { 00:35:08.539 "job": "nvme0n1", 00:35:08.539 "core_mask": "0x2", 00:35:08.539 "workload": "randread", 00:35:08.539 "status": "finished", 00:35:08.539 "queue_depth": 128, 00:35:08.539 "io_size": 4096, 00:35:08.539 "runtime": 1.005925, 00:35:08.539 "iops": 21634.813728657704, 00:35:08.539 "mibps": 84.51099112756916, 00:35:08.539 "io_failed": 0, 00:35:08.539 "io_timeout": 0, 00:35:08.539 "avg_latency_us": 5897.023271388967, 00:35:08.539 "min_latency_us": 4649.935238095238, 00:35:08.539 "max_latency_us": 14043.42857142857 00:35:08.539 } 00:35:08.539 ], 00:35:08.539 "core_count": 1 00:35:08.539 } 00:35:08.539 10:48:42 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:08.539 10:48:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:08.798 10:48:42 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:35:08.798 10:48:42 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:35:08.798 10:48:42 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:08.798 10:48:42 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:08.798 10:48:42 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:08.798 10:48:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:09.056 10:48:42 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:35:09.056 10:48:42 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:09.056 10:48:42 keyring_linux -- keyring/linux.sh@23 -- # return 00:35:09.056 10:48:42 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:09.056 10:48:42 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:35:09.056 10:48:42 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:35:09.056 10:48:42 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:09.056 10:48:42 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:09.056 10:48:42 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:09.056 10:48:42 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:09.056 10:48:42 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:09.056 10:48:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:09.056 [2024-12-12 10:48:43.034631] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:09.056 [2024-12-12 10:48:43.035389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5c220 (107): Transport endpoint is not connected 00:35:09.056 [2024-12-12 10:48:43.036385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5c220 (9): Bad file descriptor 00:35:09.056 [2024-12-12 10:48:43.037386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:09.056 [2024-12-12 10:48:43.037397] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:09.056 [2024-12-12 10:48:43.037404] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:09.056 [2024-12-12 10:48:43.037413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
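
The errors above are the point of this step: the target only knows key0, so an attach with :spdk-test:key1 must fail, and linux.sh@84 wraps the call in NOT, which succeeds only when the wrapped command fails. Below is a sketch of that inversion idiom, consistent with the es checks visible just after the RPC error dump (signal deaths, exit codes above 128, are passed through as real failures):

# Sketch of the NOT idiom used for expected-failure assertions.
NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return "$es"   # killed by a signal: a real failure
    (( es != 0 ))                    # succeed only if the command failed
}
NOT false && echo "false failed, as required"
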
00:35:09.056 request: 00:35:09.056 { 00:35:09.056 "name": "nvme0", 00:35:09.056 "trtype": "tcp", 00:35:09.056 "traddr": "127.0.0.1", 00:35:09.056 "adrfam": "ipv4", 00:35:09.056 "trsvcid": "4420", 00:35:09.056 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:09.056 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:09.056 "prchk_reftag": false, 00:35:09.056 "prchk_guard": false, 00:35:09.056 "hdgst": false, 00:35:09.056 "ddgst": false, 00:35:09.056 "psk": ":spdk-test:key1", 00:35:09.056 "allow_unrecognized_csi": false, 00:35:09.056 "method": "bdev_nvme_attach_controller", 00:35:09.056 "req_id": 1 00:35:09.056 } 00:35:09.056 Got JSON-RPC error response 00:35:09.056 response: 00:35:09.056 { 00:35:09.056 "code": -5, 00:35:09.056 "message": "Input/output error" 00:35:09.056 } 00:35:09.056 10:48:43 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:35:09.056 10:48:43 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:09.056 10:48:43 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:09.056 10:48:43 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:09.056 10:48:43 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:35:09.056 10:48:43 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:09.056 10:48:43 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:35:09.056 10:48:43 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:35:09.056 10:48:43 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:35:09.056 10:48:43 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:09.056 10:48:43 keyring_linux -- keyring/linux.sh@33 -- # sn=913690785 00:35:09.056 10:48:43 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 913690785 00:35:09.056 1 links removed 00:35:09.056 10:48:43 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:09.056 10:48:43 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:35:09.056 10:48:43 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:35:09.056 10:48:43 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:35:09.056 10:48:43 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:35:09.315 10:48:43 keyring_linux -- keyring/linux.sh@33 -- # sn=667683120 00:35:09.315 10:48:43 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 667683120 00:35:09.315 1 links removed 00:35:09.315 10:48:43 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1795776 00:35:09.315 10:48:43 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1795776 ']' 00:35:09.315 10:48:43 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1795776 00:35:09.315 10:48:43 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:09.315 10:48:43 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:09.315 10:48:43 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1795776 00:35:09.315 10:48:43 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:09.315 10:48:43 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:09.315 10:48:43 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1795776' 00:35:09.315 killing process with pid 1795776 00:35:09.315 10:48:43 keyring_linux -- common/autotest_common.sh@973 -- # kill 1795776 00:35:09.315 Received shutdown signal, test time was about 1.000000 seconds 00:35:09.315 00:35:09.315 
Latency(us) 00:35:09.315 [2024-12-12T09:48:43.338Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:09.315 [2024-12-12T09:48:43.338Z] =================================================================================================================== 00:35:09.315 [2024-12-12T09:48:43.338Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:09.315 10:48:43 keyring_linux -- common/autotest_common.sh@978 -- # wait 1795776 00:35:09.315 10:48:43 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1795548 00:35:09.315 10:48:43 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1795548 ']' 00:35:09.315 10:48:43 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1795548 00:35:09.315 10:48:43 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:09.315 10:48:43 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:09.315 10:48:43 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1795548 00:35:09.315 10:48:43 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:09.315 10:48:43 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:09.315 10:48:43 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1795548' 00:35:09.315 killing process with pid 1795548 00:35:09.315 10:48:43 keyring_linux -- common/autotest_common.sh@973 -- # kill 1795548 00:35:09.574 10:48:43 keyring_linux -- common/autotest_common.sh@978 -- # wait 1795548 00:35:09.833 00:35:09.833 real 0m4.840s 00:35:09.833 user 0m8.809s 00:35:09.833 sys 0m1.460s 00:35:09.833 10:48:43 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:09.833 10:48:43 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:09.833 ************************************ 00:35:09.833 END TEST keyring_linux 00:35:09.833 ************************************ 00:35:09.833 10:48:43 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:35:09.833 10:48:43 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:35:09.833 10:48:43 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:35:09.833 10:48:43 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:35:09.833 10:48:43 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:35:09.833 10:48:43 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:35:09.833 10:48:43 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:35:09.833 10:48:43 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:35:09.833 10:48:43 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:35:09.833 10:48:43 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:35:09.833 10:48:43 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:35:09.833 10:48:43 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:35:09.833 10:48:43 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:35:09.833 10:48:43 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:35:09.833 10:48:43 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:35:09.833 10:48:43 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:35:09.833 10:48:43 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:35:09.833 10:48:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:09.833 10:48:43 -- common/autotest_common.sh@10 -- # set +x 00:35:09.833 10:48:43 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:35:09.833 10:48:43 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:35:09.833 10:48:43 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:35:09.833 10:48:43 -- common/autotest_common.sh@10 -- # set +x 00:35:15.117 INFO: APP EXITING 
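
Both teardowns above run through the same killprocess helper: check that the pid is alive, make sure the process is not a sudo wrapper (which would itself need sudo to signal), kill it, then wait so its exit status and shutdown output land in the log before the next test starts. Condensed to the sequence the xtrace shows, with the parts the trace leaves silent guessed:

# Sketch of killprocess, following the kill -0 / ps / kill / wait trace above.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 0            # nothing left to kill
    if [ "$(ps --no-headers -o comm= "$pid")" = sudo ]; then
        sudo kill "$pid"                              # the wrapper needs sudo
    else
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    wait "$pid" 2>/dev/null || true                   # reap and flush output
}
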
00:35:15.117 INFO: killing all VMs 00:35:15.117 INFO: killing vhost app 00:35:15.117 INFO: EXIT DONE 00:35:17.656 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:35:17.656 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:35:17.656 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:35:17.915 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:35:17.915 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:35:17.915 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:35:17.915 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:35:17.915 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:35:17.915 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:35:17.915 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:35:17.915 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:35:17.915 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:35:17.915 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:35:17.915 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:35:17.915 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:35:18.175 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:35:18.175 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:35:20.712 Cleaning 00:35:20.712 Removing: /var/run/dpdk/spdk0/config 00:35:20.971 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:35:20.971 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:35:20.971 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:35:20.971 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:35:20.971 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:35:20.971 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:35:20.971 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:35:20.971 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:35:20.971 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:35:20.971 Removing: /var/run/dpdk/spdk0/hugepage_info 00:35:20.971 Removing: /var/run/dpdk/spdk1/config 00:35:20.971 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:35:20.971 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:35:20.971 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:35:20.971 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:35:20.971 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:35:20.971 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:35:20.971 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:35:20.971 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:35:20.971 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:35:20.971 Removing: /var/run/dpdk/spdk1/hugepage_info 00:35:20.971 Removing: /var/run/dpdk/spdk2/config 00:35:20.971 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:35:20.971 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:35:20.971 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:35:20.971 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:35:20.971 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:35:20.971 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:35:20.971 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:35:20.971 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:35:20.971 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:35:20.971 Removing: /var/run/dpdk/spdk2/hugepage_info 00:35:20.971 Removing: /var/run/dpdk/spdk3/config 00:35:20.971 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:35:20.971 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:35:20.971 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:35:20.971 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:35:20.971 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:35:20.971 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:35:20.971 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:35:20.971 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:35:20.971 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:35:20.971 Removing: /var/run/dpdk/spdk3/hugepage_info 00:35:20.971 Removing: /var/run/dpdk/spdk4/config 00:35:20.971 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:35:20.971 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:35:20.971 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:35:20.971 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:35:20.971 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:35:20.971 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:35:20.971 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:35:20.971 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:35:20.971 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:35:20.971 Removing: /var/run/dpdk/spdk4/hugepage_info 00:35:20.971 Removing: /dev/shm/bdev_svc_trace.1 00:35:20.971 Removing: /dev/shm/nvmf_trace.0 00:35:20.971 Removing: /dev/shm/spdk_tgt_trace.pid1322154 00:35:20.971 Removing: /var/run/dpdk/spdk0 00:35:20.971 Removing: /var/run/dpdk/spdk1 00:35:20.971 Removing: /var/run/dpdk/spdk2 00:35:20.971 Removing: /var/run/dpdk/spdk3 00:35:20.971 Removing: /var/run/dpdk/spdk4 00:35:20.971 Removing: /var/run/dpdk/spdk_pid1320061 00:35:20.971 Removing: /var/run/dpdk/spdk_pid1321099 00:35:20.971 Removing: /var/run/dpdk/spdk_pid1322154 00:35:20.971 Removing: /var/run/dpdk/spdk_pid1322777 00:35:21.230 Removing: /var/run/dpdk/spdk_pid1323698 00:35:21.230 Removing: /var/run/dpdk/spdk_pid1323873 00:35:21.230 Removing: /var/run/dpdk/spdk_pid1324874 00:35:21.230 Removing: /var/run/dpdk/spdk_pid1324890 00:35:21.230 Removing: /var/run/dpdk/spdk_pid1325236 00:35:21.230 Removing: /var/run/dpdk/spdk_pid1326720 00:35:21.230 Removing: /var/run/dpdk/spdk_pid1327960 00:35:21.230 Removing: /var/run/dpdk/spdk_pid1328244 00:35:21.230 Removing: /var/run/dpdk/spdk_pid1328595 00:35:21.230 Removing: /var/run/dpdk/spdk_pid1328834 00:35:21.230 Removing: /var/run/dpdk/spdk_pid1329118 00:35:21.230 Removing: /var/run/dpdk/spdk_pid1329362 00:35:21.230 Removing: /var/run/dpdk/spdk_pid1329610 00:35:21.230 Removing: /var/run/dpdk/spdk_pid1329892 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1330611 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1333791 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1333917 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1334199 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1334393 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1335058 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1335168 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1335546 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1335728 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1336020 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1336025 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1336275 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1336410 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1336851 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1337093 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1337387 00:35:21.231 Removing: 
/var/run/dpdk/spdk_pid1341244 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1345431 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1355446 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1356033 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1360317 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1360567 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1364769 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1370676 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1373265 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1383965 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1392755 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1394545 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1395451 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1412210 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1416208 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1461193 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1466454 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1472191 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1478553 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1478559 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1479568 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1480459 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1481742 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1482200 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1482208 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1482501 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1482656 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1482661 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1483549 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1484433 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1485333 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1485789 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1485793 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1486056 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1487220 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1488180 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1496316 00:35:21.231 Removing: /var/run/dpdk/spdk_pid1524876 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1529393 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1530955 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1532741 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1532848 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1532986 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1533213 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1533703 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1535459 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1536236 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1536720 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1538772 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1539339 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1539959 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1544148 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1549601 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1549603 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1549605 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1553483 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1562327 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1566468 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1572563 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1573840 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1575133 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1576428 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1581037 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1585290 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1589241 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1596490 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1596557 00:35:21.490 Removing: 
/var/run/dpdk/spdk_pid1601119 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1601348 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1601573 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1602011 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1602022 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1606922 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1607491 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1611964 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1614514 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1619907 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1625164 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1633775 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1640870 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1640872 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1660047 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1660688 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1661147 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1661654 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1662330 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1662952 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1663472 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1663934 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1668101 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1668326 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1674276 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1674427 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1679772 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1684040 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1693571 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1694209 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1698724 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1699182 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1703360 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1709035 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1711596 00:35:21.490 Removing: /var/run/dpdk/spdk_pid1721543 00:35:21.749 Removing: /var/run/dpdk/spdk_pid1730086 00:35:21.749 Removing: /var/run/dpdk/spdk_pid1731856 00:35:21.749 Removing: /var/run/dpdk/spdk_pid1732748 00:35:21.749 Removing: /var/run/dpdk/spdk_pid1749093 00:35:21.750 Removing: /var/run/dpdk/spdk_pid1752869 00:35:21.750 Removing: /var/run/dpdk/spdk_pid1755665 00:35:21.750 Removing: /var/run/dpdk/spdk_pid1763236 00:35:21.750 Removing: /var/run/dpdk/spdk_pid1763286 00:35:21.750 Removing: /var/run/dpdk/spdk_pid1768397 00:35:21.750 Removing: /var/run/dpdk/spdk_pid1770310 00:35:21.750 Removing: /var/run/dpdk/spdk_pid1772233 00:35:21.750 Removing: /var/run/dpdk/spdk_pid1773258 00:35:21.750 Removing: /var/run/dpdk/spdk_pid1775222 00:35:21.750 Removing: /var/run/dpdk/spdk_pid1776426 00:35:21.750 Removing: /var/run/dpdk/spdk_pid1784999 00:35:21.750 Removing: /var/run/dpdk/spdk_pid1785449 00:35:21.750 Removing: /var/run/dpdk/spdk_pid1785895 00:35:21.750 Removing: /var/run/dpdk/spdk_pid1788357 00:35:21.750 Removing: /var/run/dpdk/spdk_pid1789262 00:35:21.750 Removing: /var/run/dpdk/spdk_pid1789753 00:35:21.750 Removing: /var/run/dpdk/spdk_pid1793515 00:35:21.750 Removing: /var/run/dpdk/spdk_pid1793526 00:35:21.750 Removing: /var/run/dpdk/spdk_pid1795012 00:35:21.750 Removing: /var/run/dpdk/spdk_pid1795548 00:35:21.750 Removing: /var/run/dpdk/spdk_pid1795776 00:35:21.750 Clean 00:35:21.750 10:48:55 -- common/autotest_common.sh@1453 -- # return 0 00:35:21.750 10:48:55 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:35:21.750 10:48:55 -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:21.750 10:48:55 -- common/autotest_common.sh@10 -- # set +x 00:35:21.750 10:48:55 -- 
00:35:21.750 10:48:55 -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:21.750 10:48:55 -- common/autotest_common.sh@10 -- # set +x
00:35:21.750 10:48:55 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:35:21.750 10:48:55 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:35:21.750 10:48:55 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:35:22.009 10:48:55 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:35:22.009 10:48:55 -- spdk/autotest.sh@398 -- # hostname
00:35:22.009 10:48:55 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-04 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:35:22.009 geninfo: WARNING: invalid characters removed from testname!
00:35:43.949 10:49:16 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:45.329 10:49:19 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:47.235 10:49:21 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:49.142 10:49:23 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:51.048 10:49:24 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:53.012 10:49:26 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
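The lcov invocations above form a capture-then-filter coverage flow: capture the counters accumulated during the tests, merge them with the pre-test baseline so never-executed files still appear, then repeatedly strip paths that should not count toward SPDK coverage. Condensed into a standalone sketch (same flags as the log, with the long --rc option lists shortened to the two lcov_* switches; cov_base.info is assumed to have been captured before the tests ran):

  #!/usr/bin/env bash
  set -e
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  OUT=$SPDK/../output
  RC="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"

  # 1. Capture counters produced while the tests ran.
  lcov $RC -q -c --no-external -d "$SPDK" -t spdk-wfp-04 -o "$OUT/cov_test.info"

  # 2. Merge with the pre-test baseline.
  lcov $RC -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

  # 3. Strip third-party, system, and sample-app code. The CI run passes
  #    --ignore-errors unused,unused only on the '/usr/*' pass; applying it
  #    to every pass simply makes unmatched patterns non-fatal.
  for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov $RC -q --ignore-errors unused -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
  done

  # 4. Drop the intermediate tracefiles (the rm -f step in the log below).
  rm -f "$OUT/cov_base.info" "$OUT/cov_test.info"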
00:35:54.931 10:49:28 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:35:54.931 10:49:28 -- spdk/autorun.sh@1 -- $ timing_finish
00:35:54.931 10:49:28 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:35:54.931 10:49:28 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:35:54.931 10:49:28 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:35:54.931 10:49:28 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:35:54.931 + [[ -n 1242055 ]]
00:35:54.931 + sudo kill 1242055
00:35:54.942 [Pipeline] }
00:35:54.957 [Pipeline] // stage
00:35:54.962 [Pipeline] }
00:35:54.976 [Pipeline] // timeout
00:35:54.981 [Pipeline] }
00:35:54.996 [Pipeline] // catchError
00:35:55.001 [Pipeline] }
00:35:55.015 [Pipeline] // wrap
00:35:55.022 [Pipeline] }
00:35:55.035 [Pipeline] // catchError
00:35:55.042 [Pipeline] stage
00:35:55.044 [Pipeline] { (Epilogue)
00:35:55.055 [Pipeline] catchError
00:35:55.056 [Pipeline] {
00:35:55.065 [Pipeline] echo
00:35:55.067 Cleanup processes
00:35:55.071 [Pipeline] sh
00:35:55.357 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:35:55.357 1806535 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:35:55.371 [Pipeline] sh
00:35:55.656 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:35:55.656 ++ grep -v 'sudo pgrep'
00:35:55.656 ++ awk '{print $1}'
00:35:55.656 + sudo kill -9
00:35:55.656 + true
00:35:55.668 [Pipeline] sh
00:35:55.953 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:36:08.185 [Pipeline] sh
00:36:08.469 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:36:08.469 Artifacts sizes are good
00:36:08.484 [Pipeline] archiveArtifacts
00:36:08.493 Archiving artifacts
00:36:08.612 [Pipeline] sh
00:36:08.898 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:36:08.913 [Pipeline] cleanWs
00:36:08.923 [WS-CLEANUP] Deleting project workspace...
00:36:08.923 [WS-CLEANUP] Deferred wipeout is used...
00:36:08.929 [WS-CLEANUP] done
00:36:08.931 [Pipeline] }
00:36:08.947 [Pipeline] // catchError
00:36:08.958 [Pipeline] sh
00:36:09.245 + logger -p user.info -t JENKINS-CI
00:36:09.257 [Pipeline] }
00:36:09.269 [Pipeline] // stage
00:36:09.274 [Pipeline] }
00:36:09.286 [Pipeline] // node
00:36:09.291 [Pipeline] End of Pipeline
00:36:09.317 Finished: SUCCESS
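For reference, the Epilogue's "Cleanup processes" step above boils down to one pipeline: list every process whose full command line mentions the workspace's spdk tree (pgrep -af), drop the pgrep invocation itself from the list, keep the PID column, and force-kill the rest. A sketch with the same behavior; the xargs -r (run nothing on empty input) plus the trailing || true reproduces the log's "+ true", which keeps the step green when nothing is left to kill:

  #!/usr/bin/env bash
  WS=/var/jenkins/workspace/nvmf-tcp-phy-autotest
  # -a prints each PID with its full command line; -f matches the pattern
  # against the full command line rather than just the process name.
  sudo pgrep -af "$WS/spdk" \
      | grep -v 'sudo pgrep' \
      | awk '{print $1}' \
      | xargs -r sudo kill -9 || true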